Saturday, November 03, 2012

Windows Azure and Cloud Computing Posts for 10/29/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


‡‡ Updated 11/3/2012 with new articles marked ‡‡.
‡   Updated 11/2/2012 with new articles marked ‡.
•• Updated 11/1/2012 with new articles marked ••.
•   Updated 10/31/2012 with new articles marked •.

Tip: Copy the bullet(s) or dagger(s), press Ctrl+F, paste them into the Find text box, and click Next to locate updated articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, Hadoop and Media Services

‡ Brad Calder (@CalderBrad, pictured below) and Aaron Ogus of the Windows Azure Storage Team described Windows Azure’s Flat Network Storage and 2012 Scalability Targets in an 11/2/2012 post:

Earlier this year, we deployed a flat network for Windows Azure across all of our datacenters to create Flat Network Storage (FNS) for Windows Azure Storage. We used a flat network design in order to provide very high bandwidth network connectivity for storage clients. This new network design and the resulting bandwidth improvements allow us to support Windows Azure Virtual Machines, where we store VM persistent disks as durable network attached blobs in Windows Azure Storage. Additionally, the new network design enables scenarios such as MapReduce and HPC that can require significant bandwidth between compute and storage.

From the start of Windows Azure, we decided to separate customer VM-based computation from storage, allowing each to scale independently and making it easier to provide multi-tenancy and isolation. To make this work for the scenarios we need to address, a quantum leap in network scale and throughput was required. This resulted in FNS, where the Windows Azure Networking team (under Albert Greenberg), along with the Windows Azure Storage, Fabric and OS teams, made and deployed several hardware and software networking improvements.

The move to new storage hardware and a high-bandwidth network constitutes the significant improvement in our second-generation (Gen 2) storage compared with our first-generation (Gen 1) hardware, as outlined below:

[Table: Gen 1 vs. Gen 2 hardware comparison]

The deployment of our Gen 2 SKU, along with software improvements, provides significant bandwidth between compute and storage using a flat network topology. The specific implementation of our flat network for Windows Azure is referred to as the “Quantum 10” (Q10) network architecture. Q10 provides a fully non-blocking, 10 Gbps-based, fully meshed network with an aggregate backplane in excess of 50 Tbps of bandwidth for each Windows Azure datacenter. Another major improvement in reliability and throughput is the move from a hardware load balancer to a software load balancer. The storage architecture and design described here have been tuned to fully leverage the new Q10 network to provide flat network storage for Windows Azure Storage.

With these improvements, we are pleased to announce an increase in the scalability targets for Windows Azure Storage, where all new storage accounts are created on the Gen 2 hardware SKU. These new scalability targets apply to all storage accounts created after June 7th, 2012. Storage accounts created before this date have the prior scalability targets described here. Unfortunately, we do not offer the ability to migrate storage accounts, so only storage accounts created after June 7th, 2012 have these new scalability targets.

To find out the creation date of your storage account, you can go to the new portal, click on the storage account, and see the creation date on the right in the quick glance section as shown below:

[Screenshot: storage account creation date shown in the portal’s quick glance section]

Storage Account Scalability Targets

By the end of 2012, we will have finished rolling out the software improvements for our flat network design. This will provide the following scalability targets for a single storage account created after June 7th 2012.

  • Capacity – Up to 200 TBs
  • Transactions – Up to 20,000 entities/messages/blobs per second
  • Bandwidth for a Geo Redundant storage account
    • Ingress - up to 5 gigabits per second
    • Egress - up to 10 gigabits per second
  • Bandwidth for a Locally Redundant storage account
    • Ingress - up to 10 gigabits per second
    • Egress - up to 15 gigabits per second

Storage accounts have geo-replication on by default to provide what we call Geo Redundant Storage. Customers can turn geo-replication off to use what we call Locally Redundant Storage, which results in a discounted price relative to Geo Redundant Storage and higher ingress and egress targets (by end of 2012) as described above. For more information on Geo Redundant Storage and Locally Redundant Storage, please see here.

Note, the actual transaction and bandwidth targets achieved by your storage account will very much depend upon the size of objects, access patterns, and the type of workload your application exhibits. To go above these targets, a service should be built to use multiple storage accounts, partitioning the blob containers, tables, queues, and objects across those storage accounts. By default, a single Windows Azure subscription gets five storage accounts. However, you can contact customer support to get more storage accounts if you need to store more data than that (e.g., petabytes).
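
As a rough illustration of that multiple-account approach, here is a minimal C# sketch; the account names and the modulo-hash scheme are illustrative assumptions, not guidance from the storage team:

using System;

// Sketch: spread objects across several storage accounts by hashing the partition key.
// Note that String.GetHashCode is not guaranteed stable across .NET versions or processes;
// a persisted mapping or a stable hash (e.g., an MD5 of the key) is safer in practice.
static class AccountSharder
{
    static readonly string[] AccountNames = { "myaccount0", "myaccount1", "myaccount2" };

    public static string PickAccount(string partitionKey)
    {
        // Mask the sign bit to get a non-negative bucket index across the available accounts.
        int bucket = (partitionKey.GetHashCode() & 0x7fffffff) % AccountNames.Length;
        return AccountNames[bucket];
    }
}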

Partition Scalability Targets

Within a storage account, all of the objects are grouped into partitions as described here. Therefore, it is important to understand the performance targets of a single partition for our storage abstractions, which are as follows (the Queue and Table throughputs below were achieved using an object size of 1 KB):

  • Single Queue – all of the messages in a queue are accessed via a single queue partition. A single queue is targeted to be able to process:
    • Up to 2,000 messages per second
  • Single Table Partition – a table partition consists of all the entities in a table that have the same partition key value, and usually tables have many partitions. The throughput target for a single table partition is:
    • Up to 2,000 entities per second
    • Note, this is for a single partition, not a single table. Therefore, a table with good partitioning can process up to 20,000 entities/second, which is the overall account target described above.
  • Single Blob – the partition key for blobs is the “container name + blob name”, so we can partition blobs down to a single blob per partition to spread blob access across our servers. The target throughput of a single blob is:
    • Up to 60 MBytes/sec

The above throughputs are the high-end targets. What can be achieved by your application very much depends upon the size of the objects being accessed, the operation types (workload) and the access patterns. We encourage all services to test the performance at the partition level for their workload.

When your application reaches the limit of what a partition can handle for your workload, it will start to receive “503 Server Busy” or “500 Operation Timeout” responses. When this occurs, the application should use exponential backoff for retries. The exponential backoff allows the load on the partition to decrease and smooths out spikes in traffic to that partition.
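
A minimal C# sketch of that backoff, with arbitrary retry count and delay values; the storage client libraries’ built-in retry policies provide equivalent behavior for you:

using System;
using System.Net;
using System.Threading;

// Sketch: retry a storage REST call with exponential backoff when the partition is busy.
// The client libraries surface these errors as storage exceptions and can retry for you;
// this shows the idea for code that calls the REST API directly.
static class BackoffHelper
{
    public static void ExecuteWithBackoff(Action operation, int maxRetries = 5)
    {
        TimeSpan delay = TimeSpan.FromSeconds(1);
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                operation();
                return;
            }
            catch (WebException ex)
            {
                var response = ex.Response as HttpWebResponse;
                bool busy = response != null &&
                    ((int)response.StatusCode == 503 || (int)response.StatusCode == 500);
                if (!busy || attempt >= maxRetries) throw;

                Thread.Sleep(delay);                         // let the partition cool down
                delay = TimeSpan.FromTicks(delay.Ticks * 2); // double the delay each retry
            }
        }
    }
}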

In summary, we are excited to announce our first step towards providing flat network storage. We plan to continue to invest in improving bandwidth between compute and storage as well as increase the scalability targets of storage accounts and partitions over time.


•• Benjamin Guinebertiere (@benjguin) described Installing HDInsight (Hadoop) [on-premises version] on a single Windows box in a 10/31 bilingual post. From the English version:

Announced at the //build conference, HDInsight is available as a Web Platform Installer installation. This allows you to run Hadoop on a Windows box (like a laptop) without requiring Cygwin.

Let’s see how to install this from a blank Windows Server 2012 server (example).


Go to http://www.microsoft.com/web


This installs the Web Platform Installer.

Type HDInsight in the search box and press Enter.

[Screenshots of the remaining Web Platform Installer steps that complete the HDInsight installation]


• Nagarjun Guraja of the Windows Azure Storage Team described Windows Azure Storage Emulator 1.8 in a 10/30/2012 post:

In our continuous endeavor to enrich the development experience, we are extremely pleased to announce the new Storage Emulator, which has much improved parity with the Windows Azure Storage cloud service.

What is Storage Emulator?

The Storage Emulator emulates the Windows Azure Storage Blob, Table, and Queue cloud services on the local machine, helping developers get started and do basic local testing of their storage applications without incurring the cost associated with the cloud service. This version of the Windows Azure Storage Emulator supports Blobs, Tables, and Queues up to REST version 2012-02-12.

How does it work?

[Diagram: Storage Emulator architecture]

The Storage Emulator exposes different HTTP endpoints (port 10000 for the blob, 10001 for the queue, and 10002 for the table service) on localhost to receive and serve storage requests. Upon receiving a request, the emulator validates the request for correctness, authenticates it, authorizes it (if necessary), works with the data in SQL tables and the file system, and finally sends a response to the client.
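
For example, a .NET client can be pointed at those local endpoints with the standard development-storage connection string. A minimal sketch using the 2.0 storage client library (the container name is an arbitrary example):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: connect to the local Storage Emulator instead of the cloud service.
// "UseDevelopmentStorage=true" resolves to the well-known local endpoints:
//   blob  http://127.0.0.1:10000/devstoreaccount1
//   queue http://127.0.0.1:10001/devstoreaccount1
//   table http://127.0.0.1:10002/devstoreaccount1
CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
CloudBlobClient blobClient = account.CreateCloudBlobClient();

CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
container.CreateIfNotExists();   // served by the emulator's blob endpoint on port 10000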

Delving deeper into the internals, the Storage Emulator stores the data associated with queues and tables in SQL tables. For blobs, however, it stores the metadata in SQL tables and the actual data on the local disk, one file for each blob, for better performance. When deleting blobs, the Storage Emulator does not synchronously clean up unreferenced blob data on disk while performing blob operations. Instead, it compacts and garbage collects such data in the background for better scalability and concurrency.

Storage Emulator Dependencies:
  • SQL Express or LocalDB
  • .NET 4.0 or later with SQL Express or .NET 4.0.2 or later with LocalDB

Installing Storage Emulator

The Storage Emulator can work with LocalDB, SQL Express, or even a full-blown SQL Server as its SQL store.

The following steps will help you get started with the emulator using LocalDB.

  1. Install .NET framework 4.5 from here.
  2. Install X64 or X86 LocalDB from here.
  3. Install the Windows Azure Emulator from here.

Alternatively, if you have Storage Emulator 1.7 installed, you can do an in-place update to the existing emulator. Please note that Storage Emulator 1.8 uses a new SQL schema, so a DB reset is required when doing an in-place update, which will result in the loss of your existing data.

The following steps will help in performing an in-place update.

  1. Shut down the storage emulator, if it is running.
  2. Replace the binaries ‘Microsoft.WindowsAzure.DevelopmentStorage.Services.dll’, ‘Microsoft.WindowsAzure.DevelopmentStorage.Store.dll’ and ‘Microsoft.WindowsAzure.DevelopmentStorage.Storev4.0.2.dll’, located at the storage emulator installation path (the default is "%systemdrive%\Program Files\Microsoft SDKs\Windows Azure\Emulator\devstore"), with those available here.
  3. Open a command prompt in admin mode and run ‘dsinit /forceCreate’ to recreate the DB. You can find the ‘dsinit’ tool at the storage emulator installation path.
  4. Start the storage emulator.
What’s new in 1.8?

Storage Emulator 1.8 supports the REST version 2012-02-12, along with earlier versions. Below are the service-specific enhancements.

Blob Service Enhancements:

In the 2012-02-12 REST version, the Windows Azure Storage cloud service introduced support for container leases, improved blob leases, and asynchronous copy blob across different storage accounts, along with enhancements to blob shared access signatures. All of those new features are supported in Storage Emulator 1.8.

Since the emulator has just one built-in account, you can initiate a cross-account copy blob by providing a valid cloud-based URL. The emulator serves such cross-account copy blob requests asynchronously by downloading the blob data in 4 MB chunks and updating the copy status.
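
Here is a hedged sketch of such a request using the 2.0 .NET library; the source URL is a placeholder and must be publicly readable (or carry a Shared Access Signature) so the emulator can download it:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: cross-account copy into the emulator's single built-in account.
CloudBlobClient client =
    CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("copies");
container.CreateIfNotExists();

CloudBlockBlob destination = container.GetBlockBlobReference("copied-blob");
Uri source = new Uri("https://someaccount.blob.core.windows.net/public/source-blob");

string copyId = destination.StartCopyFromBlob(source);  // emulator downloads in 4 MB chunks
// Poll destination.CopyState until its Status is CopyStatus.Success.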

To know more about the new features in general, the following links would be helpful:

Storage Emulator 1.8 also garbage collects unreferenced page blob files that may be produced as a result of delete blob requests, failed copy blob requests, etc.

Queue Service Enhancements:

In the 2012-02-12 REST version, the Windows Azure Storage cloud service introduced support for queue shared access signatures (SAS). Storage Emulator 1.8 supports Queue SAS.

Table Service Enhancements:

In the 2012-02-12 REST version, the Windows Azure Storage cloud service introduced support for table shared access signatures (SAS). Storage Emulator 1.8 supports Table SAS.

In order to achieve full parity with the Windows Azure Storage table service APIs, the table service in the emulator has been completely rewritten from scratch to support truly schema-less tables and to expose data for querying and updating via the OData protocol. As a result, Storage Emulator 1.8 fully supports the table operations below, which were not supported in Emulator 1.7 (a brief sketch follows the list).

  • Query Projection: You can read more about it here.
  • Upsert operations: You can read more about it here.
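
As a rough sketch of both operations against the emulator using the 2.0 .NET table API (the table name and the Email property are made-up examples):

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Sketch: exercise upsert and query projection against the emulator's table service.
CloudTableClient tableClient =
    CloudStorageAccount.DevelopmentStorageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("people");
table.CreateIfNotExists();

// Upsert: insert the entity if it is new, replace it if it already exists.
var entity = new DynamicTableEntity("partition1", "row1");
entity.Properties["Email"] = EntityProperty.GeneratePropertyForString("someone@example.com");
table.Execute(TableOperation.InsertOrReplace(entity));

// Query projection: ask the service to return only the Email property.
TableQuery<DynamicTableEntity> query =
    new TableQuery<DynamicTableEntity>().Select(new List<string> { "Email" });
foreach (DynamicTableEntity e in table.ExecuteQuery(query))
{
    // e.Properties contains only the projected column (plus the keys and timestamp).
}
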
Known Issues/Limitations
  • The storage emulator supports only a single fixed account and a well-known authentication key. They are: Account name: devstoreaccount1, Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
  • The URI scheme supported by the storage emulator differs from the URI scheme supported by the cloud storage services. The development URI scheme specifies the account name as part of the hierarchical path of the URI, rather than as part of the domain name. This difference is due to the fact that domain name resolution is available in the cloud but not on the local computer. For more information about URI differences in the development and production environments, see “Using storage service URIs” section in Overview of Running a Windows Azure Application with the Storage Emulator.
  • The storage emulator does not support Set Blob Service Properties or SetServiceProperties for blob, queue and table services.
  • Date properties in the Table service in the storage emulator support only the range supported by SQL Server 2005 (For example, they are required to be later than January 1, 1753). All dates before January 1, 1753 are changed to this value. The precision of dates is limited to the precision of SQL Server 2005, meaning that dates are precise to 1/300th of a second.
  • The storage emulator supports partition key and row key property values of less than 900 bytes. The total size of the account name, table name, and key property names together cannot exceed 900 bytes.
  • The storage emulator does not validate that the size of a batch in an entity group transaction is less than 4 MB. Batches are limited to 4 MB in Windows Azure, so you must ensure that a batch does not exceed this size before transitioning to the Windows Azure storage services.
  • Avoid using a ‘PartitionKey’ or ‘RowKey’ that contains the ‘%’ character, due to a double-decoding bug
  • Getting messages from a queue might not return messages in strictly increasing order of the message’s ‘Insertion TimeStamp’ + ‘visibilitytimeout’
Summary

Storage Emulator 1.8 has a great degree of parity with the Windows Azure Storage cloud service in terms of API support and usability, and we will continue to improve it. We hope you like it; please share your feedback with us to make it better.


• The SQL Server Team (@SQLServer) posted a Strata-Hadoop World 2012 Recap on 10/30/2012:

We just wrapped up a busy week at Strata-Hadoop World, one of the predominant Big Data conferences. Microsoft showed up in force, with the big news being our press announcement around the availability of public previews for Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows, our 100 percent Apache Hadoop-compatible solution for Windows Azure and Windows Server.

With HDInsight, Microsoft truly democratizes Big Data by opening up Hadoop to the Windows ecosystem. When you combine HDInsight with curated datasets from Windows Azure Data Market and powerful self-service BI tools like Excel Power View, it’s easy to see why customers can count on the comprehensiveness and simplicity of Microsoft’s enterprise-grade Big Data solution.

We kicked off the conference with Shawn Bice and Mike Flasko’s session Drive Smarter Decisions with Microsoft Big Data, which offered attendees a hands-on look at how simple it is to deploy and manage Hadoop clusters on Windows; the demo combined corporate data with curated datasets from Windows Azure Data Market and then analyzed and visualized this data with Excel Power View (you can watch Mike deliver a repeat performance of his session demos in this SiliconAngle interview). For folks who want to hear even more, Mike will be delivering a webcast later this month that dives into the business value that Microsoft Big Data unlocks, including more demos of HDInsight in action.

Matt Winkler then gave a session on the .NET and JavaScript frameworks for Hadoop, which enable millions of additional developers to program Hadoop jobs. These frameworks further Microsoft’s mission of bringing big data to the masses; learn more and download the .NET SDK here.

Roger Barga and Dipanjan Banik closed the conference with their editorial session Predictive Modeling & Operational Analytics over Streaming Data, discussing how to leverage the StreamInsight platform for operational analytics. By using a “Monitor, Manage and Mine” architectural pattern, businesses are able to store and process event streams, analyze them, and produce predictive models that can then be installed directly into event-processing services to generate real-time alerts and actions.

We had the opportunity to talk with hundreds of customers who were very excited to hear about our Big Data vision and see the concrete progress we’ve made with HDInsight. Be sure to join Microsoft at the Strata Santa Clara conference in February, where we’ll have much more to share!

Learn more about Microsoft Big Data and test out the HDInsight previews for yourself at microsoft.com/bigdata.


Joe Giardino, Serdar Ozler, Justin Yu and Veena Udayabhanu of the Windows Azure Storage Team posted Introducing Windows Azure Storage Client Library 2.0 for .NET and Windows Runtime on 10/29/2012:

Today we are releasing version 2.0 of the Windows Azure Storage Client Library. This is our largest update to our .NET library to date, which includes new features, broader platform compatibility, and revisions to address the great feedback you’ve given us over time. The code is available on GitHub now. The libraries are also available through NuGet and are included in the Windows Azure SDK for .NET - October 2012; for more information and links, see below.

In addition to the .NET 4.0 library, we are also releasing two libraries for Windows Store apps as a Community Technology Preview (CTP) that fully support the Windows Runtime platform and can be used to build modern Windows Store apps for both Windows RT (which supports ARM-based systems) and Windows 8, in any of the languages supported by Windows Store apps (JavaScript, C++, C#, and Visual Basic). This blog post serves as an overview of these libraries and covers some of the implementation details that will be helpful to understand when developing cloud applications in .NET regardless of platform.

What’s New

We have introduced a number of new features in this release of the Storage Client Library including:

  • Simplicity and Usability - A greatly simplified API surface which will allow developers new to storage to get up and running faster while still providing the extensibility for developers who wish to customize the behavior of their applications beyond the default implementation.
  • New Table Implementation - An entirely new Table Service implementation which provides a simple interface that is optimized for low latency/high performance workloads, as well as providing a more extensible serialization model to allow developers more control over their data.
  • Rich debugging and configuration capabilities – One common piece of feedback we receive is that it’s too difficult to know what happened “under the covers” when making a call to the storage service. How many retries were there? What were the error codes? The OperationContext object provides rich debugging information, real-time status events for parallel and complex actions, and extension points allowing users to customize requests or enable end-to-end client tracing (see the sketch following this list)
  • Windows Runtime Support - A Windows Runtime component with support for developing Windows Store apps using JavaScript, C++, C#, and Visual Basic, as well as a strongly typed Tables extension library for C++, C#, and Visual Basic
  • Complete Sync and Asynchronous Programming Model (APM) implementation - A complete synchronous API for .NET 4.0. Previous releases of the client implemented synchronous methods by simply surrounding the corresponding APM methods with a ManualResetEvent; this was not ideal, as extra threads remained blocked during execution. In this release all synchronous methods complete work on the thread in which they are called, with the notable exceptions of the stream implementations available via Cloud[Page|Block]Blob.Open[Read|Write], due to parallelism.
  • Simplified RetryPolicies - Easy and reusable RetryPolicies
  • .NET Client Profile – The library now supports the .NET Client Profile. For more on the .NET Client Profile, see here.
  • Streamlined Authentication Model - There is now a single StorageCredentials type that supports Anonymous, Shared Access Signature, and Account and Key authentication schemes
  • Consistent Exception Handling - The library immediately will throw any exception encountered prior to making the request to the server. Any exception that occurs during the execution of the request will subsequently be wrapped inside a single StorageException type that wraps all other exceptions as well as providing rich information regarding the execution of the request.
  • API Clarity - All methods that make requests to the server are clearly marked with the [DoesServiceRequest] attribute
  • Expanded Blob API - Blob DownloadRange allows user to specify a given range of bytes to download rather than rely on a stream implementation
  • Blob download resume - A feature that will issue a subsequent range request(s) to download only the bytes not received in the event of a loss of connectivity
  • Improved MD5 - Simplified MD5 behavior that is consistent across all client APIs
  • Updated Page Blob Implementation - Full Page Blob implementation including read and write streams
  • Cancellation - Support for Asynchronous Cancellation via the ICancellableAsyncResult. Note, this can be used with .NET CancellationTokens via the CancellationToken.Register() method.
  • Timeouts - Separate client and server timeouts which support end to end timeout scenarios
  • Expanded Azure Storage Feature Support – It supports the 2012-02-12 REST API version with implementations for Blob & Container Leases; Blob, Table, and Queue Shared Access Signatures; and Asynchronous Cross-Account Copy Blob
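
To make the simplified surface and the OperationContext item above concrete, here is a minimal sketch; the connection string, container, and blob names are placeholders:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: download a blob with the 2.0 API and inspect the requests via OperationContext.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlockBlob blob = client.GetContainerReference("mycontainer")
                            .GetBlockBlobReference("myblob.txt");

OperationContext context = new OperationContext { ClientRequestID = Guid.NewGuid().ToString() };

using (var stream = new MemoryStream())
{
    blob.DownloadToStream(stream, null /* accessCondition */, null /* options */, context);
}

// Each physical request (including any retries) is recorded for debugging.
foreach (RequestResult result in context.RequestResults)
{
    Console.WriteLine("{0} {1}", result.HttpStatusCode, result.ServiceRequestID);
}
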
Design

When designing the new Storage Client for .NET and Windows Runtime, we set up a series of design guidelines to follow throughout the development process. In addition to these guidelines, there are some unique requirements when developing for Windows Runtime, and specifically when projecting into JavaScript, that has driven some key architectural decisions.

For example, our previous RetryPolicy was based on a delegate that the user could configure; however as this cannot be supported on all platforms we have redesigned the RetryPolicy to be a simple and consistent implementation everywhere. This change has also allowed us to simplify the interface in order to address user feedback regarding the complexity of the previous implementation. Now a user who constructs a custom RetryPolicy can re-use that same implementation across platforms.

Windows Runtime

A key driver in this release was expanding platform support, specifically targeting the upcoming releases of Windows 8, Windows RT, and Windows Server 2012. As such, we are releasing the following two Windows Runtime components to support Windows Runtime as Community Technology Preview (CTP):

  • Microsoft.WindowsAzure.Storage.winmd - A fully projectable storage client that supports JavaScript, C++, C#, and VB. This library contains all core objects as well as support for Blobs, Queues, and a base Tables Implementation consumable by JavaScript
  • Microsoft.WindowsAzure.Storage.Table.dll – A table extension library that provides generic query support and strongly typed entities. This is used by non-JavaScript applications to provide strongly typed entities as well as reflection-based serialization of POCO objects
Breaking Changes

With the introduction of Windows 8, Windows RT, and Windows Server 2012 we needed to broaden the platform support of our current libraries. To meet this requirement we have invested significant effort in reworking the existing Storage Client codebase, while also delivering new features and significant performance improvements (more details below). One of the primary goals in this version of the client libraries was to maintain a consistent API across platforms so that developers’ knowledge and code could transfer naturally from one platform to another. As such, we have introduced some breaking changes from the previous version of the library to support this common interface. We have also used this opportunity to act on user feedback we have received via the forums and elsewhere regarding both the .NET library and the recently released Windows Azure Storage Client Library for Java. For existing users we will be posting an upgrade guide for breaking changes to this blog that describes each change in more detail.

Please note the new client is published under the same NuGet package as previous 1.x releases. As such, please check any existing projects as an automatic upgrade will introduce breaking changes.

Additional Dependencies

The new table implementation depends on three libraries (collectively referred to as ODataLib), which are resolved through the ODataLib (version 5.0.2) packages available through NuGet and not the WCF Data Services installer which currently contains 5.0.0 versions. The ODataLib libraries can be downloaded directly or referenced by your code project through NuGet. The specific ODataLib packages are:

Namespaces

One particular breaking change of note is that the name of the assembly and root namespace has moved to Microsoft.WindowsAzure.Storage instead of Microsoft.WindowsAzure.StorageClient. In addition to aligning better with other Windows Azure service libraries this change allows developers to use the legacy 1.X versions of the library and the 2.0 release side-by-side as they migrate their applications. Additionally, each Storage Abstraction (Blob, Table, and Queue) has now been moved to its own sub-namespace to provide a more targeted developer experience and cleaner IntelliSense experience. For example the Blob implementation is located in Microsoft.WindowsAzure.Storage.Blob, and all relevant protocol constructs are located in Microsoft.WindowsAzure.Storage.Blob.Protocol.

Testing, stability, and engaging the open source community

We are committed to providing a rock solid API that is consistent, stable, and reliable. In this release we have made significant progress in increasing test coverage as well as breaking apart large test scenarios into more targeted ones that are more consumable by the public.

Microsoft and Windows Azure are making great efforts to be as open and transparent as possible regarding the client libraries for our services. The source code for all the libraries can be downloaded via GitHub under the Apache 2.0 license. In addition we have provided over 450 new unit tests for the .NET 4.0 library alone. Now users who wish to modify the codebase have a simple and lightweight way to validate their changes. It is also important to note that most of these tests run against the Storage Emulator that ships with the Windows Azure SDK for .NET, allowing users to execute tests without incurring any usage on their storage accounts. We will also be providing a series of higher-level scenarios and how-to’s to get users up and running with both simple and advanced topics relating to Windows Azure Storage.

Summary

We have put a lot of work into providing a truly first class development experience for the .NET community to work with Windows Azure Storage. In addition to the content provided in these blog posts we will continue to release a series of additional blog posts which will target various features and scenarios in more detail, so check back soon. Hopefully you can see your past feedback reflected in this new library. We really do appreciate the feedback we have gotten from the community, so please keep it coming by leaving a comment below or participating on our forums.


Joe Giardino, Serdar Ozler, Justin Yu and Veena Udayabhanu of the Windows Azure Storage Team followed up with a detailed Windows Azure Storage Client Library 2.0 Breaking Changes & Migration Guide post on 10/29/2012:

The recently released Windows Azure Storage Client Library for .NET includes many new features, expanded platform support, extensibility points, and performance improvements. In developing this version of the library we made some distinct breaks with Storage Client 1.7 and prior in order to support common paradigms across .NET and Windows Runtime applications.

Additionally, we have addressed distinct pieces of user feedback from the forums and users we’ve spoken with. We have made a great effort to provide a stable platform for clients to develop their applications on and will continue to do so. This blog post serves as a reference point for these changes as well as a migration guide to assist clients in migrating existing applications to the 2.0 release. If you are new to developing applications using the Storage Client in .NET, you may want to refer to the overview here to get acquainted with the basic concepts. This blog post will focus on changes, and future posts will introduce the concepts that the Storage Client supports.

Namespaces

The core namespaces of the library have been reworked to provide a more targeted IntelliSense experience, as well as to align more closely with the programming experience provided by other Windows Azure services. The root namespace, as well as the assembly name itself, has been changed from Microsoft.WindowsAzure.StorageClient to Microsoft.WindowsAzure.Storage. Additionally, each service has been broken out into its own sub-namespace. For example, the blob implementation is located in Microsoft.WindowsAzure.Storage.Blob, and all protocol-relevant constructs are in Microsoft.WindowsAzure.Storage.Blob.Protocol. Note: The Windows Runtime component will not expose the Microsoft.WindowsAzure.Storage.[Blob|Table|Queue].Protocol namespaces, as they contain dependencies on .NET-specific types and are therefore not projectable.

The following is a detailed listing of client accessible namespaces in the assembly.

  • Microsoft.WindowsAzure.Storage – Common types such as CloudStorageAccount and StorageException. Most applications should include this namespace in their using statements.
  • Microsoft.WindowsAzure.Storage.Auth – The StorageCredentials object that is used to encapsulate multiple forms of access (Account & Key, Shared Access Signature, and Anonymous).
  • Microsoft.WindowsAzure.Storage.Auth.Protocol – Authentication handlers that support SharedKey and SharedKeyLite for manual signing of requests
  • Microsoft.WindowsAzure.Storage.Blob – Blob convenience implementation, applications utilizing Windows Azure Blobs should include this namespace in their using statements
    • Microsoft.WindowsAzure.Storage.Blob.Protocol – Blob Protocol layer
  • Microsoft.WindowsAzure.Storage.Queue – Queue convenience implementation, applications utilizing Windows Azure Queues should include this namespace in their using statements
    • Microsoft.WindowsAzure.Storage.Queue.Protocol – Queue Protocol layer
  • Microsoft.WindowsAzure.Storage.Table – New lightweight Table Service implementation based on ODataLib. We will be posting an additional blog that dives into this new Table implementation in greater detail.
    • Microsoft.WindowsAzure.Storage.Table.DataServices – The legacy Table Service implementation based on System.Data.Services.Client. This includes TableServiceContext, CloudTableQuery, etc.
    • Microsoft.WindowsAzure.Storage.Table.Protocol – Table Protocol layer implementation
  • Microsoft.WindowsAzure.Storage.RetryPolicies - Default RetryPolicy implementations (NoRetry, LinearRetry, and ExponentialRetry) as well as the IRetryPolicy interface
  • Microsoft.WindowsAzure.Storage.Shared.Protocol – Analytics objects and core HttpWebRequestFactory
What’s New
  • Added support for the .NET Client Profile, allowing for easier installation of your application on machines where the full .NET Framework has not been installed.
  • There is a new dependency on the three libraries released as ODataLib, which are available via NuGet and CodePlex.
  • A reworked and simplified codebase that shares a large amount of code between platforms
  • Over 450 new unit tests published to GitHub
  • All APIs that execute a request against the storage service are marked with the DoesServiceRequest attribute
  • Support for custom user headers
  • OperationContext – Provides an optional source of diagnostic information about how a given operation is executing. Provides mechanism for E2E tracing by allowing clients to specify a client trace id per logical operation to be logged by the Windows Azure Storage Analytics service.
  • True “synchronous” method support. SDK 1.7 implemented synchronous methods by simply wrapping a corresponding Asynchronous Programming Model (APM) method with a ManualResetEvent. In this release all work is done on the calling thread. This excludes stream implementations available via Cloud[Page|Block]Blob.OpenRead and OpenWrite and parallel uploads.
  • Support for asynchronous cancellation via ICancellableAsyncResult. Note that this can be hooked up to .NET cancellation tokens via the Register() method, as illustrated below:

ICancellableAsyncResult result = container.BeginExists(callback, state);

token.Register((o) => result.Cancel(), null /* state */);

  • Timeouts – The library now allows two separate timeouts to be specified. These timeouts can be specified directly on the service client (e.g., CloudBlobClient) or overridden via the RequestOptions. These timeouts are nullable and can therefore be disabled. (See the sketch following this list.)
    • The ServerTimeout is the timeout given to the server for each request executed in a given logical operation. An operation may make more than one request in the case of a retry, parallel upload, etc.; the ServerTimeout is sent for each of these requests. This is set to 90 seconds by default.
    • The MaximumExecutionTime provides a true end to end timeout. This timeout is a client side timeout that spans all requests, including any potential retries, a given operation may execute. This is disabled by default.
  • Full PageBlob support including lease, cross account copy, and read/write streams
  • Cloud[Block|Page]Blob DownloadRange support
  • Blobs support download resume, in the event of an error the subsequent request will be truncated to specify a range at the correct byte offset.
  • The default MD5 behavior has been updated to utilize a FIPS compliant implementation. To use the default .NET MD5 please set CloudStorageAccount.UseV1MD5 = true;
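
A brief sketch of the two timeouts described in the list above; the property and option names shown are assumptions based on the 2.0 surface described here, and the values are arbitrary:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: configure the server timeout and the end-to-end maximum execution time.
CloudBlobClient client =
    CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient();

// Applies to each physical request the client sends (90 seconds by default).
client.ServerTimeout = TimeSpan.FromSeconds(30);

// Client-side limit spanning all requests and retries of one logical operation (off by default).
client.MaximumExecutionTime = TimeSpan.FromMinutes(2);

// Both can also be overridden per call via the request options.
var options = new BlobRequestOptions
{
    ServerTimeout = TimeSpan.FromSeconds(10),
    MaximumExecutionTime = TimeSpan.FromSeconds(60)
};
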
Breaking Changes
General
  • Dropped support for .NET Framework 3.5, Clients must use .Net 4.0 or above
  • The Cloud[Blob|Table|Queue]Client.ResponseReceived event has been removed; instead, there are SendingRequest and ResponseReceived events on the OperationContext, which can be passed into each logical operation
  • All Classes are sealed by default to maintain consistency with Windows RT library
  • ResultSegments are no longer generic. For example, in Storage Client 1.7 there is a ResultSegment<CloudTable>, while in 2.0 there is a TableResultSegment to maintain consistency with Windows RT library.
  • RetryPolicies
    • The Storage Client will no longer prefilter certain types of exceptions or HTTP status codes prior to evaluating the user’s RetryPolicy. The RetryPolicies contained in the library will by default not retry 400-class errors, but this can be overridden by implementing your own policy
    • A retry policy is now a class that implements the IRetryPolicy interface. This is to simplify the syntax as well as provide commonality with the Windows RT library
  • StorageCredentials
    • CloudStorageAccount.SetConfigurationSettingPublisher has been removed. Instead, the members of StorageCredentials are now mutable, allowing users to accomplish similar scenarios in a more streamlined manner by simply mutating the StorageCredentials instance associated with a given client (or clients) via the provided UpdateKey methods.
    • All credential types have been simplified into a single StorageCredentials object that supports anonymous requests, Shared Access Signature, and account-and-key authentication (see the sketch following this list).
  • Exceptions
    • StorageClientException and StorageServerException are now simplified into a single Exception type: StorageException. All APIs will throw argument exceptions immediately; once a request is initiated all other exceptions will be wrapped.
    • StorageException no longer directly contains ExtendedErrorInformation. This has been moved inside the RequestResult object which tracks the current state of a given request
  • Pagination has been simplified. A segmented result will simply return up to the maximum number of results specified. If a continuation token is received it is left to the user to make any subsequent requests to complete a given page size.
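
To illustrate the consolidated StorageCredentials item above, a short sketch; the account name, key, and SAS token are placeholders:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;

// Sketch: one StorageCredentials type covers the three authentication schemes.
StorageCredentials accountAndKey = new StorageCredentials("myaccount", "<base64 account key>");
StorageCredentials sharedAccessSignature = new StorageCredentials("<SAS token>");
StorageCredentials anonymous = new StorageCredentials();   // anonymous requests

CloudStorageAccount account = new CloudStorageAccount(accountAndKey, true /* useHttps */);

// Keys are now mutable, so a rolled key can be applied to live clients in place.
accountAndKey.UpdateKey("<new base64 account key>");
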
Blobs
  • All blobs must be accessed via CloudPageBlob or CloudBlockBlob, the CloudBlob base class has been removed. To get a reference to the concrete blob class when the client does not know the type please see the GetBlobReferenceFromServer on CloudBlobClient and CloudBlobContainer
  • In an effort to be more transparent to the application layer, the default parallelism is now set to 1 for blob clients (this can be configured via CloudBlobClient.ParallelOperationThreadCount). In previous releases of the SDK, we observed many users scheduling multiple concurrent blob uploads to more fully exploit the parallelism of the system. However, when each of these operations was internally processing up to 8 simultaneous operations itself, there were some adverse side effects on the system. By setting parallelism to 1 by default, it is now up to the user to opt in to this concurrent behavior.
  • CloudBlobClient.SingleBlobUploadThresholdInBytes can now be set as low as 1 MB.
  • StreamWriteSizeInBytes has been moved to CloudBlockBlob and can now be set as low as 16KB. Please note that the maximum number of blocks a blob can contain is 50,000 meaning that with a block size of 16KB, the maximum blob size that can be stored is 800,000KB or ~ 781 MB.
  • All upload and download methods are now stream-based; the FromFile, ByteArray, and Text overloads have been removed.
  • The stream implementation available via CloudBlockBlob.OpenWrite will no longer encode MD5 into the block id. Instead the block id is now a sequential block counter appended to a fixed random integer in the format of [Random:8]-[Seq:6].
  • For uploads if a given stream is not seekable it will be uploaded via the stream implementation which will result in multiple operations regardless of length. As such, when available it is considered best practice to pass in seekable streams.
  • MD5 has been simplified, all methods will honor the three MD5 related flags exposed via BlobRequestOptions
    • StoreBlobContentMD5 – Stores the Content MD5 on the Blob on the server (default to true for Block Blobs and false for Page Blobs)
    • UseTransactionalMD5 – Will ensure each upload and download provides transactional security via the HTTP Content-MD5 header. Note: When enabled, all Download Range requests must be 4MB or less. (default is disabled, however any time a Content-MD5 is sent by the server the client will validate it unless DisableContentMD5Validation is set)
    • DisableContentMD5Validation – Disables any Content-MD5 validation on downloads. This is needed to download any blobs that may have had their Content-MD5 set incorrectly
    • Cloud[Page|Block]Blob no longer exposes BlobAttributes. Instead the BlobProperties, Metadata, Uri, etc. are exposed on the Cloud[Page|Block]Blob object itself
  • The stream available via Cloud[Page|Block]Blob.OpenRead() does not support multiple Asynchronous reads prior to the first call completing. You must first call EndRead prior to a subsequent call to BeginRead.
  • Protocol
    • All blob Protocol constructs have been moved to the Microsoft.WindowsAzure.Storage.Blob.Protocol namespace. BlobRequest and BlobResponse have been renamed to BlobHttpWebRequestFactory and BlobHttpResponseParsers respectively.
    • Signing Methods have been removed from BlobHttpWebRequestFactory, alternatively use the SharedKeyAuthenticationHandler in the Microsoft.WindowsAzure.Storage.Auth.Protocol namespace
Tables
  • New Table Service Implementation - A new lightweight table implementation is provided in the Microsoft.WindowsAzure.Storage.Table namespace. Note: For backwards compatibility the Microsoft.WindowsAzure.Storage.Table.DataServices.TableServiceEntity was not renamed; however, this entity type is not compatible with Microsoft.WindowsAzure.Storage.Table.TableEntity, as it does not implement the ITableEntity interface.
  • DataServices
    • The legacy System.Data.Services.Client based implementation has been migrated to the Microsoft.WindowsAzure.Storage.Table.DataServices namespace.
    • The CloudTableClient.Attach method has been removed. Alternatively, use a new TableServiceContext
    • TableServiceContext will now protect concurrent requests against the same context. To execute concurrent requests please use a separate TableServiceContext per logical operation.
    • TableServiceQueries will no longer rewrite the take count in the URI query string to take smaller amounts of entities based on the legacy pagination construct. Instead, the client side Lazy Enumerable will stop yielding results when the specified take count is reached. This could potentially result in retrieving a larger number of entities from the service for the last page of results. Developers who need a finer grained control over the pagination of their queries should leverage the segmented execution methods provided.
  • Protocol
    • All Table protocol constructs have been moved to the Microsoft.WindowsAzure.Storage.Table.Protocol namespace. TableRequest and TableResponse have been renamed to TableHttpWebRequestFactory and TableHttpResponseParsers respectively.
    • Signing Methods have been removed from TableHttpWebRequestFactory, alternatively use the SharedKeyLiteAuthenticationHandler in the Microsoft.WindowsAzure.Storage.Auth.Protocol namespace
Queues
  • Protocol
    • All Queue protocol constructs have been moved to the Microsoft.WindowsAzure.Storage.Queue.Protocol namespace. QueueRequest and QueueResponse have been renamed to QueueHttpWebRequestFactory and QueueHttpResponseParsers respectively.
    • Signing Methods have been removed from QueueHttpWebRequestFactory, alternatively use the SharedKeyAuthenticationHandler in the Microsoft.WindowsAzure.Storage.Auth.Protocol namespace
Migration Guide

In addition to the detailed steps above, below is a simple migration guide to help clients begin migrating existing applications.

Namespaces

A legacy application will need to update its “using” directives to include the following (a brief sketch follows this list):

  • Microsoft.WindowsAzure.Storage
  • If using credentials types directly add a using statement to Microsoft.WindowsAzure.Storage.Auth
  • If you are using a non-default RetryPolicy add a using statement to Microsoft.WindowsAzure.Storage.RetryPolicies
  • For each Storage abstraction add the relevant using statement Microsoft.WindowsAzure.Storage.[Blob|Table|Queue]
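
For a blob-based application, the resulting directives might look like the following sketch; include only the namespaces you actually use:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;           // only if using credential types directly
using Microsoft.WindowsAzure.Storage.RetryPolicies;  // only if using a non-default RetryPolicy
using Microsoft.WindowsAzure.Storage.Blob;           // or .Table / .Queue as appropriate
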
Blobs
  • Any code that accesses a blob via CloudBlob will have to be updated to use the concrete types CloudPageBlob and CloudBlockBlob. The listing methods will return the correct object type; alternatively, you may discern this via FetchAttributes(). To get a reference to the concrete blob class when the client does not know the type, see GetBlobReferenceFromServer on the CloudBlobClient and CloudBlobContainer objects (a sketch follows this list)
  • Be sure to set the desired Parallelism via CloudBlobClient.ParallelOperationThreadCount
  • Any code that may rely on the internal MD5 semantics detailed here, should update to set the correct MD5 flags via BlobRequestOptions
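
A minimal sketch of resolving the concrete blob type when it is not known up front; the container and blob names are placeholders:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: ask the service which kind of blob this is, then branch on the concrete type.
CloudBlobClient client =
    CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("mycontainer");

ICloudBlob blob = container.GetBlobReferenceFromServer("unknown-blob");

if (blob is CloudBlockBlob)
{
    // handle block blob
}
else if (blob is CloudPageBlob)
{
    // handle page blob
}
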
Tables
  • If you are migrating an existing Table application you can choose to re-implement it via the new simplified Table Service implementation, otherwise add a using to the Microsoft.WindowsAzure.Storage.Table.DataServices namespace

DataServiceContext (the base implementation of the TableServiceContext) is not thread-safe; consequently, it has been considered best practice to avoid concurrent requests against a single context, though this was not explicitly prevented. The 2.0 release will now protect against simultaneous operations on a given context. Any code that relies on concurrent requests on the same TableServiceContext should be updated to execute serially, or to utilize multiple contexts.

Summary

This blog post serves as a guide to the changes introduced by the 2.0 release of the Windows Azure Storage Client libraries.

We very much appreciate all the feedback we have gotten from customers and through the forums; please keep it coming. Feel free to leave comments below.

Joe Giardino
Serdar Ozler
Veena Udayabhanu
Justin Yu

Windows Azure Storage

Resources

Get the Windows Azure SDK for .Net


Denny Lee (@dennylee) answered Oh where, oh where did my S3N go? (in Windows Azure HDInsight) Oh where, Oh where, can it be?! in a 10/29/2012 post:

As noted in my previous post Connecting Hadoop on Azure to your Amazon S3 Blob storage, you could easily set up Azure HDInsight to go against your Amazon S3 / S3N storage. With the updates to HDInsight, you’ll notice that the Manage Cluster dialog no longer includes quick access to Set up S3.


Yet, there are times when you may want to connect your HDInsight cluster to your S3 storage. Note, this can be a tad expensive due to transfer costs.

To get S3 setup on your Hadoop cluster, from the HDInsight dashboard click on the Remote Desktop tile so you can log onto the name node.


Once you are logged in, open up the Hadoop Command Line Interface link from the desktop.


From here, switch to the c:\apps\dist\Hadoop[]\conf folder and edit the core-site.xml file. The code to add is noted below.

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>[Access Key ID]</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>[Secret Access Key]</value>
</property>

Once this is set up, you will be able to access your S3 account from your Hadoop cluster.


Herve Roggero (@hroggero) reported Faster, Simpler access to Azure Tables with Enzo Azure API in a 10/29/2012 post:

After developing the latest version of Enzo Cloud Backup, I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more detail.

Simpler Code

imageMy first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

Strongly Typed

Before diving into the code, the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

// With the SDK
public class MyData1 : TableServiceEntity
{
    public string Message { get; set; }
    public string Level { get; set; }
    public string Severity { get; set; }
}

// With the Enzo Azure API
public class MyData2 : BaseAzureTable
{
    public string Message { get; set; }
    public string Level { get; set; }
    public string Severity { get; set; }
}

Simpler Code

Now that the classes representing an Azure Table entity are defined, let’s review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

// With the Azure SDK
public List<MyData1> FetchAllEntities()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
    TableServiceContext serviceContext = tableClient.GetDataServiceContext();

    CloudTableQuery<MyData1> partitionQuery =
        (from e in serviceContext.CreateQuery<MyData1>(_tableName)
         select new MyData1()
         {
             PartitionKey = e.PartitionKey,
             RowKey = e.RowKey,
             Timestamp = e.Timestamp,
             Message = e.Message,
             Level = e.Level,
             Severity = e.Severity
         }).AsTableServiceQuery<MyData1>();

    return partitionQuery.ToList();
}

This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

The Enzo Azure API does the mapping automatically and also handles automatic reties when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

// With the Enzo Azure API
public List<MyData2> FetchAllEntities()
{
    AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
    List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
    return res;
}

As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

Fetch Strategies

Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously):

public List<MyData2> FetchAllEntitiesGUID()
{
    AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
    List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
    return res;
}

Faster Results
With Sequential Fetch Methods

Developing a faster API wasn’t a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data the Enzo Azure API seems to deliver it faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1 KB each). The average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).


With Fetch Strategies

When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1 KB each), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56 Mbps), so the fetch strategy’s results are significantly below what they could be with higher bandwidth.


Additional Methods

The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

  • Support for batch updates, deletes and inserts
  • Conversion of entities to DataRow, and List<> to a DataTable
  • Extension methods for Delete, Merge, Update, Insert
  • Support for asynchronous calls and cancellation
  • Support for fetch statistics (total bytes, total REST calls, retries…)

For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

About Herve Roggero

Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

image_thumb5

<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

•• My (@rogerjenn) Windows Azure Mobile Services creates backends for Windows 8, iPhone tip of 10/31/2012 for TechTarget’s SearchCloudComputing.com blog begins:

imageWindows front-end developers have enough on their plates simply migrating from Windows 32 desktop and Web apps to XAML/C# or HTML5/JavaScript and the new Windows Runtime (WinRT) for Windows Store apps.

imageThe cost of teaching teams new back-end coding skills for user authentication and authorization with Microsoft Accounts (formerly Live IDs) could be the straw that breaks the camel's back. Add to that messaging with Windows Notification Services (WNS), email and SMS, as well as structured storage with Windows Azure SQL Database (WASDB), and targeting Windows 8 and Azure: Microsoft's legacy escape route, and it all becomes a bit much.

imageTo ease .NET developers into this brave new world of multiple devices ranging from Win32 legacy laptops to touchscreen tablets and smartphones, Microsoft's Windows Azure team released a Windows Azure Mobile Services (WAMS) Preview. The initial release supports Windows Store apps (formerly Windows 8 applications) only; Microsoft promises support for iOS and Android devices soon.

The release consists of the following:

  1. A Mobile Services option added to the new HTML Windows Azure Management Portal generates the back-end database, authentication/authorization, notification and server-side scripting services automatically.
  2. An open-source WAMS Client SDK Preview with an Apache 2.0 license is downloadable from GitHub; the SDK requires Windows 8 RTM and Visual Studio 2012 RTM.
  3. A sample doto application demonstrates WAMS multi-tenant features and is also available from GitHub. This sample is in addition to the tutorial walkthroughs offered from the Mobile Services Developer pages and the OakLeaf Systems blog.
  4. A Store menu add-in to Visual Studio 2012's Project menu accelerates app registration with and deployment to the Windows Store.
  5. A form opened in Visual Studio 2012 can obtain a developer license for Windows 8, which is required for creating Windows Store apps.

WAMS eliminates the need to handcraft the more than a thousand lines of XAML and C# required to implement and configure Windows Azure SQL Database (WASDB), as well as corresponding Access Control and Service Bus notification components for clients and device-agnostic mobile back ends. The RESTful back end uses OData's new JSON Light payload option to support multiple front-end operating systems with minimum data overhead. WAMS's Dynamic Schema feature auto-generates WASDB table schemas and relationships so front-end designers don't need database design expertise.

Expanding the WAMS device and feature repertoire

Scott Guthrie, Microsoft corporate VP in charge of Windows Azure development, announced the following device support and features in a blog post:

  1. iOS support, which enables companies to connect iPhone and iPad apps to Mobile Services
  2. Facebook, Twitter and Google authentication support with Mobile Services
  3. Blob, Table, Queue and Service Bus support from within Mobile Services
  4. The ability to send email from Mobile Services (in partnership with SendGrid)
  5. The ability to send SMS messages from Mobile Services (in partnership with Twilio)
  6. The capability to deploy mobile services in the West U.S. region.

The update didn't modify the original Get Started with data or Get Started with push notifications C# and JavaScript sample projects from the Developer Center or Paul Batum's original DoTo sample application in the GitHub samples folder. …

Read more.

Post-publication update: Windows Azure Mobile Services now supports Windows Phone 8 apps.


•• Josh Holmes (@joshholmes) described Custom Search with Azure Mobile Services in JavaScript in an 11/1/2012 post:

imageI’ve published my first little Windows 8 app using the Azure Mobile Services in JavaScript. It was incredibly quick to get up and running and more flexible than I thought it would be.

The one thing that was tricky was that I’m using JavaScript/HTML5 to build my app and since I don’t have LINQ in JavaScript, doing a custom date search was difficult. Fortunately I got to sit down with Paul Batum from the Azure Mobile Services team and he learned me a thing or two.

I already knew the backend of Azure Mobile Services was node.js. What I didn’t realize is that we can pass in a JavaScript function to be executed server side for a highly custom search, the way that we can with LINQ from C#. The syntax is a little weird but it works a treat.

itemTable.where(function (startDate, endDate) {
        return this.Date >= endDate && this.Date <= startDate;
    }, startDate, endDate)
    .read()
    .done(function (results) {
        for (var i = 0; i < results.length; i++) {
            // do something interesting
        }
    });

imageNotice that inside the where call, I’m passing in another function. This function gets sent back to the service and runs server side. The slightly wonky part is that the inner function has to declare the parameters it expects, and you also have to pass the values for those parameters as additional arguments to where. So reading that sample carefully, you can see that we’re passing three things to the server side: the function itself and then the two actual values that we want passed to the function that executes on the server.

This allows for some awesome flexibility, well beyond custom date searches. :)


• Glenn Gailey (@ggailey777) expands on Windows Azure Mobile Services for Windows Phone 8 in his Windows Phone 8 is Finally Here! post of 10/31/2012:

imageThis is a great day for Microsoft’s Windows Phone platform, with the official launch of Windows Phone 8 along with the following Windows Phone 8-related clients and toolkits:

Windows Phone SDK 8.0

imageYesterday morning we announced the release of the Windows Phone SDK 8.0. I only got hold of this new SDK about a week ago, so I haven’t had much time to play around with it, but here’s what’s in it:

  • Visual Studio Express 2012 for Windows Phone
  • Windows Phone emulator(s)
  • Expression Blend for Windows Phone
  • Team Explorer

One reason that it took me so long to start using Windows Phone SDK 8.0 (aside from the difficulty of even getting access to it inside of Microsoft) was the stringent platform requirement of “a machine running Windows 8 with SLAT enabled.” This meant that I had to wait to upgrade my x64 Lenovo W520 laptop to Windows 8 before I could start working with Windows Phone 8 apps. This is because the emulator runs in a VM. It seems much faster and more stable than the old 7.1 version, but its virtual networking adapters do confuse my laptop from time to time.

imageThe first thing that I noticed was support for doing much cooler things with live tiles than was even possible in 7.1, including providing different tile sizes in the app that customers can set and new kinds of notifications. For more information about what’s new, see the post Introducing Windows Phone SDK 8.0.

Windows Phone Toolkit for Windows Phone 8

This critical toolkit has always been chock full of useful controls (I’ve used it quite a bit in my apps), and it’s now updated to support Windows Phone 8 development and the new SDK. You can get the toolkit from NuGet. Documentation and sources are published on CodePlex at http://phone.codeplex.com.

OData Client Library for Windows Phone 8

Microsoft had already released OData v3 client support for both .NET Framework and Windows Store apps, but support for Windows Phone had been noticeably missing from the suite of OData clients. Yesterday, we also unveiled the OData Client Tools for Windows Phone Apps. As is now usually the case for OData clients, this library isn’t included with the Windows Phone SDK 8.0, but you can easily download it from here and install it with the Windows Phone SDK 8.0.

Mobile Services SDK for Windows Phone 8

The main reason that I needed to install the Windows Phone SDK 8.0 was to test the new support in Windows Azure Mobile Services. Since Microsoft is, of course, committed to supporting its Windows Phone platform, Mobile Services also released the Windows Phone 8 SDK, along with updates to the Management Portal for Windows Phone 8. This is all tied to this week’s //BUILD conference, hosted by Microsoft, and the release of the Windows Phone SDK 8.0.

image

Mobile Services quickstart for Windows Phone 8 in the Windows Azure Management Portal

If you’ve been using the Mobile Services SDK for Windows Store apps, the .NET Framework libraries in this SDK are nearly identical to the client library in the Windows Phone 8 SDK (with some behavioral exceptions). As such, anything that you could do with Mobile Services in a Windows Store app you can also do in a Windows Phone app, so we created Windows Phone 8 versions of the original Windows Store quickstarts:

I’m definitely excited to (finally) be working on Windows Phone 8 apps and hope to be blogging more in the future about Mobile Services and Windows Phone.

With so much great buzz around Windows Phone 8, now I just need AT&T to start selling them so I can go get mine.


• John Koetsier reported from BUILD 2012: Microsoft demos simple cloud-enabling of mobile apps with Azure Mobile Services in a 10/31/2012 post to VentureBeat’s /Cloud blog:

imageLooking to cloud-enable your mobile app? Looks like Microsoft can help make that a lot easier.

Microsoft just demoed some very slick new mobile and cloud connections today at its BUILD conference in Redmond, showing how simple it is for developers to store their data in the cloud and perform operations on that data.

BUILD 2012: Microsoft demos simple cloud-enabling of mobile apps with Azure Mobile Services

Josh Twist from Windows Azure Mobile Services — which he announced now support Windows Phone 8 — connected an app to Azure authentication services live onstage. Authentication protocols not only include Microsoft accounts, but also Facebook, Twitter, and Google accounts, and Twist showed how, in just a few lines of code, developers can add social login to their apps.

This works on any app on iOS as well as more traditional desktop apps for Windows Store, and now, of course, Windows Phone 8.

imageEven more interestingly, Twist demoed how simple it is to set event handlers in Azure that execute code securely and automatically in the cloud whenever data changes. One example he showed was to automatically grab a user’s Twitter avatar when the user logs in via Twitter. In a few lines of Javascript, saved on Azure and triggered automatically when a user logged in, Mobile Services talked to Twitter, retrieved the user icon, saved it locally, and sent it to the mobile app for use in the user interface.

Impressive!

Then Twist connected the cloud app to a live tile on his Windows 8 PC, enabling quick and easy desktop monitoring of his mobile app’s activity. Also impressive.

A preview is available today, Twist said, and developers who sign up will receive 10 mobile services for free.

And the Windows Phone app data arrives on the Windows 8 desktop, via Azure Mobile Services

Image credits: vernieman via photopin cc, Microsoft



• The Windows Azure Mobile Services team posted a Get started with Mobile Services tutorial for Windows Phone 8 on 10/30/2012:

image

imageThis tutorial shows you how to add a cloud-based backend service to a Windows Phone 8 app using Windows Azure Mobile Services. In this tutorial, you will create both a new mobile service and a simple To do list app that stores app data in the new mobile service.

A screenshot from the completed app is below:

Note: To complete this tutorial, you need a Windows Azure account that has the Windows Azure Mobile Services feature enabled.

Create a new mobile service

Follow these steps to create a new mobile service.

  1. Log into the Management Portal.
  2. At the bottom of the navigation pane, click +NEW.

     

  3. Expand Mobile Service, then click Create.

    This displays the New Mobile Service dialog.

  4. In the Create a mobile service page, type a subdomain name for the new mobile service in the URL textbox and wait for name verification. Once name verification completes, click the right arrow button to go to the next page.

    This displays the Specify database settings page.

    Note: As part of this tutorial, you create a new SQL Database instance and server. You can reuse this new database and administer it as you would any other SQL Database instance. If you already have a database in the same region as the new mobile service, you can instead choose Use existing Database and then select that database. The use of a database in a different region is not recommended because of additional bandwidth costs and higher latencies.

  5. In Name, type the name of the new database, then type Login name, which is the administrator login name for the new SQL Database server, type and confirm the password, and click the check button to complete the process.

    Note: When the password that you supply does not meet the minimum requirements or when there is a mismatch, a warning is displayed.
    We recommend that you make a note of the administrator login name and password that you specify; you will need this information to reuse the SQL Database instance or the server in the future.

You have now created a new mobile service that can be used by your mobile apps.

Create a new Windows Phone app

Once you have created your mobile service, you can follow an easy quickstart in the Management Portal to either create a new app or modify an existing app to connect to your mobile service.

In this section you will create a new Windows Phone 8 app that is connected to your mobile service.

  1. In the Management Portal, click Mobile Services, and then click the mobile service that you just created.

  2. In the quickstart tab, click Windows Phone 8 under Choose platform and expand Create a new Windows Phone 8 app.

    This displays the three easy steps to create a Windows Phone app connected to your mobile service.

  3. If you haven't already done so, download and install Visual Studio 2012 Express for Windows Phone and the Mobile Services SDK on your local computer or virtual machine.

  4. Click Create TodoItems table to create a table to store app data.

  5. Under Download and run app, click Download.

    This downloads the project for the sample To do list application that is connected to your mobile service. Save the compressed project file to your local computer, and make a note of where you save it.

Run your Windows Phone app

The final stage of this tutorial is to build and run your new app.

  1. Browse to the location where you saved the compressed project files, expand the files on your computer, and open the solution file in Visual Studio 2012 Express for Windows Phone.

  2. Press the F5 key to rebuild the project and start the app.

  3. In the app, type meaningful text, such as Complete the tutorial and then click Save.

    This sends a POST request to the new mobile service hosted in Windows Azure. Data from the request is inserted into the TodoItem table. Items stored in the table are returned by the mobile service, and the data is displayed in the list.

    Note: You can review the code that accesses your mobile service to query and insert data, which is found in the MainPage.xaml.cs file (a minimal sketch of this kind of code appears after these steps).

  4. Back in the Management Portal, click the Data tab and then click the TodoItems table.

    This lets you browse the data inserted by the app into the table.
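
For reference, here is a minimal sketch of the kind of data-access code the quickstart generates in MainPage.xaml.cs. The service URL, application key, and the exact TodoItem shape shown here are placeholders, and the generated code differs in detail; this just illustrates the insert and query calls the app makes.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

public class TodoStore
{
    // Placeholders: use the URL and application key shown for your mobile service in the portal.
    private static readonly MobileServiceClient Client =
        new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

    private readonly IMobileServiceTable<TodoItem> todoTable = Client.GetTable<TodoItem>();

    // Issues a POST to the mobile service; the new row lands in the TodoItems table.
    public Task SaveAsync(TodoItem item)
    {
        return todoTable.InsertAsync(item);
    }

    // Issues a GET that returns only the items that are not yet complete.
    public Task<List<TodoItem>> GetIncompleteItemsAsync()
    {
        return todoTable.Where(t => !t.Complete).ToListAsync();
    }
}

This is essentially what happens when you click Save in the sample app and when the bound list is refreshed.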

Next Steps

Now that you have completed the quickstart, learn how to perform additional important tasks in Mobile Services:


• Cihan Biyikoglu (@cihangirb) announced a SQL PASS Summit 2012 – Disaster Recovery for Federations in Azure SQL DB session on 10/30/2012:

imageWe have been working heads down on some new improvements to federations around disaster recovery and ease of working with a group of databases. We are ready to demo one key improvement we are building for federations at PASS 2012: the new SWITCH statement, which makes it easy to manipulate a group flexibly by moving member databases in and out.

imageWe’ll talk about why this is important to DR and how the DR scenarios will evolve around these concepts when it comes to rolling back an application upgrade gone bad, or creating a copy of a federation on another server for safekeeping and more.

Look forward to seeing all of you there; http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2906


Han, MSFT reported SQL Data Sync now available in the East and West US Data Centers! in a 10/30/2012 post:

imageWe are excited to announce that we have just completed the deployment of SQL Data Sync into the East and West US data centers. Now, what does that mean to you? For those who intend to have their sync hubs in the East or West US, you can now provision the Sync Server in the respective regions thus allowing better sync performance for the particular sync groups.

It’s about time.


The TechNet Wiki recently updated its detailed How to Shard with Windows Azure SQL Database article. From the Introduction:

imageDatabase sharding is a technique of horizontal partitioning data across multiple physical servers to provide application scale-out. Windows Azure SQL Database is a cloud database service from Microsoft that provides database functionality as a utility service, offering many benefits including rapid provisioning, cost-effective scalability, high availability and reduced management overhead.

image222SQL Database combined with database sharding techniques provides for virtually unlimited scalability of data for an application. This paper provides an overview of sharding with SQL Database, covering challenges that would be experienced today, as well as how these can be addressed with features to be provided in the upcoming releases of SQL Database. …

Table of Contents
  1. Introduction
  2. An Overview of Sharding
  3. A Conceptual Framework for Sharding with SQL Database
  4. Application Design for Sharding with SQL Database and ADO.NET
  5. Applying the Prescribed ADO.NET Based Sharding Library
  6. SQL Database Federations
  7. Summary
  8. See Also

1 Introduction

This document provides guidance on building applications that utilize a technique of horizontal data partitioning known as sharding, where data is horizontally partitioned across multiple physical databases, to provide scalability of the application as data size or demand upon the application increases.

A specific goal of this guidance is to emphasize how SQL Database can facilitate a sharded application, and how an application can be designed to utilize the elasticity of SQL Database to enable for highly cost effective, on-demand, and virtually limitless scalability.

This guidance will present an overview of patterns and best practices for implementing sharding using SQL Database. Introduced are two concepts that both use .NET and SQL Database to assist with implementing sharding, an ADO.NET based provider for sharding, and extensions to SQL Database for shard federations.

This guidance will focus on providing a framework for understanding how to perform sharding today with SQL Database and how it will evolve in the near future. Future guidance will address additional facets of sharding in greater detail.

Specifically, this guidance will focus on:

  • The basic concepts involved in horizontal partitioning and sharding
  • The challenges when sharding an application
  • Common patterns for implementing sharding
  • The benefits of using SQL Database as your application’s sharding infrastructure
  • A high level design of an ADO.NET based sharding library, and
  • An introduction to SQL Database Federations which adds sharding capabilities directly into SQL Database
1.1 Sharding: The Horizontal Scaling of Data for an Application

Sharding is an application pattern for improving the scalability and throughput of large-scale data solutions. To “shard” an application means breaking the application’s logical database into smaller chunks of data and distributing those chunks across multiple physical databases to achieve application scalability. Each physical database in this architecture is referred to as a shard.

In a sharded application as outlined in this guidance, it is the rows of a logical database that are spread across separate physical databases. This differs from a solution that is vertically partitioned by putting entire sets of data entities into separate physical databases, such as putting orders and customers in different databases. The partitioning of data by value (the shard key) provides the potential for greater scalability than a vertically partitioned design, because the data can be continuously broken into more chunks that are spread across an increasing number of shards as demand requires.

The proposed implementation will map data to specific shards by applying one or more strategies upon a “sharding key” which is the primary key in one of the data entities. Related data entities are then clustered into a related set based upon the shared shard key and this unit is referred to as an atomic unit. All records in an atomic unit are always stored in the same shard.

The atomic unit forms a core architectural concept that is used by the sharding infrastructure as a basis for consistent access of data as shards are dynamically added and removed from an application. The sharding infrastructure will ensure that all related entities in an atomic unit are always within the same physical shard. This facilitates joins and aggregations, and will also be a key benefit of SQL Database Federations that will eventually be implemented into the sharding infrastructure, enabling consistent elastic scale up and down of the sharding solution.

By focusing on the atomic unit as a consistent piece of data, the sharding infrastructure will eventually be able to take action automatically based upon rules to add or remove shards to handle increasing or decreasing demand. As shards are added and removed, atomic units will need to move between shards to optimize data access. During the process of moving data the sharding infrastructure can ensure that any specific atomic unit of data will always be available either in its current or new shard with the location transparent to the application.
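
To make the shard-key mapping idea concrete, here is an illustrative sketch (not the ADO.NET sharding library described later in this guidance) of one very simple strategy: hash the sharding key over the current set of shard connection strings, so that every entity in an atomic unit resolves to the same shard.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public class SimpleShardMap
{
    private readonly List<string> _shardConnectionStrings;

    public SimpleShardMap(IEnumerable<string> shardConnectionStrings)
    {
        _shardConnectionStrings = new List<string>(shardConnectionStrings);
    }

    // Every record that shares the same sharding key (the atomic unit) maps to the same shard.
    public SqlConnection GetConnectionForKey(Guid shardingKey)
    {
        int shardIndex = (shardingKey.GetHashCode() & 0x7FFFFFFF) % _shardConnectionStrings.Count;
        return new SqlConnection(_shardConnectionStrings[shardIndex]);
    }
}

A static hash map like this is easy to build but makes rebalancing painful when shards are added or removed, which is one reason the guidance favors an infrastructure (ultimately SQL Database Federations) that can move atomic units between shards transparently.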

1.2 The benefits of sharding with SQL Database

Sharding as a concept has been available in several forms for years and has mostly focused on implementations based upon self-hosted infrastructure that address the following needs:

  • Scale out using tens, hundreds or thousands of database nodes using commodity hardware instead of expensive scale-up systems,
  • To achieve scalable performance as the number of nodes increases, and
  • Build a solution with an excellent price-performance ratio derived from the use of commodity hardware instead of expensive application servers

It is possible to build a self-hosted sharding solution, and indeed most to date have been built in this manner, but self-hosted sharding solutions also have several significant issues:

  • Scale-out is done at the application level, and the application must manage all of the details of the partitioning of data (many of these topics are discussed later),
  • Redundancy and high availability is implemented at the application level instead of in hardware or the cloud fabric,
  • Rebalancing of shards is challenging and often an offline process,
  • Physical administration of the hardware and system level software becomes increasingly difficult as more databases are added, and
  • Capital expenditure on servers is prohibitive.

SQL Database provides a unique ability for solving these issues of building a self-hosted sharded application through the elastic capabilities provided by the Azure cloud platform as a service. Key benefits are:

  • All infrastructure is managed
    SQL Database abstracts the logical from the physical administration. SQL Database handles all of the physical level tasks such as server patching, hard drives, and storage while customers only need to handle the administration of the logical databases.
  • Elastic provisioning of nodes
    Creating a new shard and adding it to a sharded application can be performed by using Windows Azure Management Portal or through DDL. SQL Database eliminates the need to take months to procure, configure and deploy the new hardware and database systems. Additionally, applications that need tens, hundreds or even thousands of databases for a short period of time can do this seamlessly and then de-provision the databases when the demand drops.
  • Pay-as-you-go pricing
    SQL Database has a linear pricing model that is very attractive for sharding solutions as the amount per month per gigabyte of storage is linear as the database size increases. This allows customers to have the ability to very accurately predict the costs that will be incurred as a system grows (and shrinks). Also, because databases are available in different editions (web and business) with differing ceilings for size limits, different users can also have control over the granularity of the increased costs as a sharding solution expands in data size.
  • High availability
    SQL Database provides a high availability SLA of 99.9% for all databases, no need to implement RAID and other availability techniques yourself.

Cloud infrastructure does introduce its own complexities, and building a sharded solution on today’s SQL Database requires careful design given its unique characteristics:

  • Maximum resource limitations on individual SQL Databases
    There are practical and technical limitations imposed in SQL Database on the maximum data set size in a single database. While these sizes will likely continue to increase over time, they will still remain relatively small compared to the overall size required of some sharded solutions.
  • Multi-tenancy performance throttling and connection management
    SQL Databases can be performance throttled in an effort to provide a minimum quality of service level. Queries issued by an application that consume more than their fair share of resources will be terminated by SQL Database, and since SQL Database is multi-tenant, queries issued by other applications could also impact the application’s performance. SQL Database also terminates idle sessions at regular intervals to preserve system resources, thereby requiring an application to be able to account for automatic session recovery (a simple retry sketch follows this list). As shards are added and removed dynamically, managing connections becomes a challenge, as an application must be able to reestablish connections as the set of shards changes.
  • Sharding with the current SQL Database is application level
    The current version of SQL Database does not have any explicit support for sharding, and therefore the application is currently responsible for all facets of sharding until the capabilities are in SQL Database Federations.
  • Rebalancing of shards is an offline process
    The addition or removal of shards from a sharded application can be a complicated process, as rules for finding data must be changed as the physical infrastructure is modified. In addition to the previously mentioned issues with connections, there are also data level issues such as key management that may require rewriting of keys as data is moved between shards. This often means that sharding solutions, even with SQL Database being able to create databases almost instantly, still may need to go offline for an indeterminate duration as data is adapted to the new federation structure.
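
As a concrete illustration of the connection-management bullet above, here is a hedged sketch of a retry wrapper for transient SQL Database errors. The error numbers checked are illustrative rather than exhaustive, and a real application would typically use a richer policy (the Transient Fault Handling Application Block, for example).

using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlRetryHelper
{
    public static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (SqlException ex)
            {
                if (attempt >= maxAttempts || !IsTransient(ex))
                {
                    throw;
                }

                // Back off briefly before reopening the connection and retrying the operation.
                Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
            }
        }
    }

    private static bool IsTransient(SqlException ex)
    {
        // 40501 = service busy (throttling), 40613 = database unavailable; illustrative only.
        return ex.Number == 40501 || ex.Number == 40613;
    }
}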

To address these complexities in effort to facilitate elastic sharding, upcoming releases of SQL Database will add support directly into the cloud infrastructure to manage the dynamic addition of shards and movement of atomic units of data consistently and transparently between shards. These capabilities are currently referred to as SQL Database Federations (described in section 6) with an outline of the concepts in Figure 1 below.

Figure 1: SQL Database Federations

Until Federations are added to SQL Database (and even after) there are still concerns that warrant the use of an application framework to facilitate sharding to offload many of these responsibilities from the application. In this document we’ll refer to the application framework as an ADO.NET based sharding library.

Creating a custom ADO.NET based sharding library, combined with functionality to be provided in later releases of SQL Database including SQL Database Federations, will provide the most advanced, flexible, scalable and economical sharding application infrastructure available.

1.3 The Remainder of the Guidance

The remainder of this guidance will give a high-level overview of various concepts and patterns in any sharded application, and provide guidance on specific techniques commonly used to manage sharded data in SQL Database. This will be extended into a discussion of techniques using ADO.NET and SQL Database for sharding, followed with more detailed examples and a brief introduction of the features to be added to SQL Database when it supports sharding.

image_thumb18


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

‡ Alex James (@adjames) described OData in WebAPI – Microsoft ASP.NET Web API OData 0.2.0-alpha release in an 11/2/2012 post:

imageSince my last set of blog posts on OData support in WebAPI (see parts 1 & 2), we’ve been busy adding support for Server Driven Paging, Inheritance and OData Actions. Our latest alpha release on Nuget has preview-level support for these features. Let’s explore the new features and a series of extensions you can use to get them working…

Server Driven Paging:

imageOur code has supported client initiated paging using $skip and $top for some time now. However there are situations where the server wants to initiate paging. For example if a naïve (or potentially malicious) client makes a request like this:

GET ~/Products

and you have a lot of products, you are supposed to return all the products (potentially thousands, millions, billions) to the client. This uses a lot of computing resources, and this single request ties up all those resources. This is unfortunate because your client:

  • Might simply be malicious
  • Might be naïve, perhaps it only needed 20 results?
  • Might lock up waiting for all the products to come over the wire.
  • etc.

Thankfully OData has a way to initiate what we call server driven paging, this allows the server to return just a ‘page’ of results + a next link, which tells the client how to retrieve the next page of data. This means naïve clients only get the first page of data and servers have the opportunity to throttle requests from potentially malicious clients because to get all the data multiple requests are required.

This is now really easy to turn on in WebAPI using the Queryable attribute, like this:

[Queryable(ResultLimit=100)]
public IQueryable<Product> Get()
{
    // Return the full query; [Queryable] applies the 100-result page size and the next link.
    // _context is a placeholder for whatever backs your products (an EF DbContext, for example).
    return _context.Products;
}

This code tells WebAPI to return the first 100 matching results and then add an OData next link to the results that, when followed, will re-enter the same method and continue retrieving the next 100 matching results. This process continues until either the client stops following next links or there is no more data.

If the strategy [Queryable] uses for Server Driven Paging is not appropriate for your data source, you can also drop down and use ODataQueryOptions and ODataResult<T> directly.

Inheritance:

The OData protocol supports entity type inheritance, so one entity type can derive from another, and often you’ll want to set up inheritance in your service model. To support OData inheritance we have:

  • Improved the ModelBuilder – you can explicitly define inheritance relationships or you can let the ODataConventionModelBuilder infer them for you automatically.
  • Improved our formatters so we can serialize and deserialize derived types.
  • Improved our link generation to include needed casts.
  • and we need to improve our controller action selection so needed casts are routed correctly.
Model Builder API

You can explicitly define inheritance relationships, with either the ODataModelBuilder or the ODataConventionModelBuilder, like this:

// define the Car type that inherits from Vehicle
modelBuilder
.Entity<Car>()
.DerivesFrom<Vehicle>()
.Property(c => c.SeatingCapacity);


// define the Motorcycle type
modelBuilder
.Entity<Motorcycle>()
.DerivesFrom<Vehicle>()
.Property(m => m.CanDoAWheelie);

With inheritance it occasionally makes sense to mark entity types as abstract, which you can do like this:

modelBuilder
.Entity<Vehicle>()
.Abstract()
.HasKey(v => v.ID)
.Property(v => v.WheelCount);

Here we are telling the model builder that Vehicle is an abstract entity type.

When working with derived types you can explicitly define properties and relationship on derived types just as before using EntityTypeConfiguration<TDerivedType>.Property(..), EntityTypeConfiguration<TDerivedType>.HasRequired(…) etc.

Note: In OData every entity type must have a key, either declared or inherited, whether it is abstract or not.

ODataConventionModelBuilder and inheritance

The ODataConventionModelBuilder, which is generally recommended over the ODataModelBuilder, will automatically infer inheritance hierarchies in the absence of explicit configuration. Then once the hierarchy is inferred, it will also infer properties and navigation properties too. This allows you to write less code, focusing on where you deviate from our conventions.
For example this code:

ODataModelBuilder modelBuilder = new ODataConventionModelBuilder();
modelBuilder.EntitySet<Vehicle>("Vehicle");

Will look for classes derived from Vehicle and go ahead and create corresponding entity types.

Sometimes you don’t want to have entity types for every .NET type. This is easy to achieve: you instruct the model builder to ignore types like this:

builder.IgnoreTypes(typeof(Sportbike));

With this code in place the implicit model discovery will not add an entity type for Sportbike, even though it derives from Vehicle (in this case indirectly i.e. Sportbike –> Motorcycle –> Vehicle).

Known inheritance issues

In this alpha our support for Inheritance is not quite complete. You can create a service with inheritance in it but there are a number of issues we plan to resolve by RTM:

  • Delta<T> doesn’t currently support derived types. This means issuing PATCH requests against instances of a derived type is not currently working.
  • Type filtering in the path is not currently supported. i.e. ~/Vehicles/NS.Motorcycles?$filter=…
  • Type casts in $filter are not currently supported, i.e. ~/Vehicles?$filter=NS.Motorcycles/Manufacturer/Name eq ‘Ducati’
  • Type casts in $orderby are not currently supported, i.e. ~/Vehicles?$orderby=Name, NS.Motorcycle/Manufacturer/Name
    OData Actions:

    The other major addition since the august preview is support for OData Actions. Quoting the OData blog:

    “Actions … provide a way to inject behaviors into an otherwise data centric model without confusing the data aspects of the model, while still staying true to the resource oriented underpinnings of OData."

    Adding OData actions support to the WebAPI involves 4 things:

    1. Defining OData Actions in the model builder.
    2. Advertising bindable and available actions in representations of the entity sent to clients.
    3. Deserializing parameters values when people attempt to invoke an Action.
    4. Routing requests to invoke OData Actions to an appropriate controller action.
      Model Builder API

      Firstly we added a new class called ActionConfiguration. You can construct this directly if necessary, but generally you use factory methods that simplify configuring the most common kinds of OData Actions, namely those that bind to an Entity or a collection of Entities. For example:

      ActionConfiguration pullWheelie = builder.Entity<Motorcycle>().Action("PullWheelie");
      pullWheelie.Parameter<int>("ForSeconds");
      pullWheelie.Returns<bool>();

      defines an Action called ‘PullWheelie’ that binds to a Motorcycle, and that takes an integer parameter “ForSeconds” indicating how long to hold the wheelie, and returns true/false indicating whether the wheelie was successful.

      You can also define an action that binds to a Collection of entities like this:

      ActionConfiguration pullWheelie = builder.Entity<Motorcycle>().Collection.Action("PullWheelie");

      Calling this action would result in a collection of Motorcycles all attempting to ‘Pull a Wheelie’ at the same time :)

      There is currently no code in the ODataConventionModelBuilder to infer OData Actions, so actions have to be explicitly added for now. That might change as we formalize our conventions more, but if that happens it won’t be until after the first RTM release.

      Controlling Action links and availability

      When serializing an entity with Actions the OData serializer calls the delegate you pass to ActionConfiguration.HasActionLink(…) for each action. This delegate is responsible for returning a Uri to be embedded in the response sent to the client. The Uri when present tells clients how to invoke the OData Action bound to the current entity. Basically this is hypermedia.

      If you are using the ODataConventionModelBuilder, by default the HasActionLink is automatically configured to generate links in the form: ~/entityset(key)[/cast]/action, or for example:

      ~/Vehicles(1)/Drive

      or to access an action bound to a derived type like Motorcycles:

      ~/Vehicles(1)/Namespace.Motorcycle/PullWheelie

      OData also allows you to define actions that are only occasionally bindable. For example you might not be able to ‘Stop’ a Virtual Machine if it has already been stopped. This makes ‘Stop’ a transient action. Use the TransientAction() method, which like Action hangs off EntityTypeConfiguration<T>, to define your transient actions.

      Finally to make your action truly transient you need to pass a delegate to HasActionLink that returns null when the action is in fact not available.
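
As a hedged illustration of that null-returning delegate, here is a sketch using a hypothetical VirtualMachine entity type; the exact members exposed by the link-factory context in this alpha are an assumption here, so treat it as the shape of the idea rather than the exact API.

// Hypothetical transient action: only advertise 'Stop' when the virtual machine is running.
ActionConfiguration stop = builder.Entity<VirtualMachine>().TransientAction("Stop");
stop.Returns<bool>();
stop.HasActionLink(entityContext =>
{
    // The EntityInstance member is an assumption for this alpha.
    var vm = entityContext.EntityInstance as VirtualMachine;

    // Returning null means no action link is written out, so clients see 'Stop' as unavailable.
    return (vm != null && vm.IsRunning)
        ? new Uri("http://localhost/odata/VirtualMachines(" + vm.Id + ")/Stop")
        : null;
});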

      Handling requests to invoke OData Actions

      The OData specification says that OData Actions must be invoked using a POST request, and that parameters to the action (excluding the binding parameter) are passed in the body of post in JSON format. This example shows an implementation of the PullWheelie action that binds to a Motorcycle:

      [HttpPost]
      public bool PullWheelieOnMotorcycle(int boundId, ODataActionParameters parameters)
      {
      // retrieve the binding parameter, in this case a motorcycle, using the boundId.
      // i.e. POST ~/Vehicles(boundId)/Namespace.Motorcycle/PullWheelie
      Motorcycle motorcycle = _dbContext.Vehicles.OfType<Motorcycle>().SingleOrDefault(m => m.Id == boundId);
      // extract the ForSeconds parameter
      int numberOfSeconds = (int) parameters["ForSeconds"];

      // left as an exercise to the reader.
      DoWheelie(motorcycle, numberOfSeconds);

      return true;
      }

      As you can see there is a special class here called ODataActionParameters, this is configured to tell the ODataMediaTypeFormatter to read the POST body as an OData Action invocation payload. The ODataActionParameters class is essentially a dictionary from which you can retrieve the parameters used to invoke the action. In this case you can see we are extracting the ‘ForSeconds’ parameter. Finally because the PullWheelie action was configured to return Bool when we defined it, we simply return Bool and the ODataMediaTypeFormatter takes care of the rest.

      The only remaining piece is setting up routing to handle Actions, Inheritance, NavigationProperties and all the other OData conventions. Unfortunately this problem is a little too involved for standard WebAPI routing, which means integrating all these new features is pretty hard.

      We realized this and started fleshing something called System.Web.Http.OData.Futures to help out…

      Integrating everything using System.Web.Http.OData.Futures

      While the core of all these new features are complete, getting them to work together is tricky. That makes checking out the ODataSampleService (part of aspnet.codeplex.com) even more important.

      The sample service itself is starting to look a lot simpler: you don’t need to worry about setting up complicated routes, registering OData formatters etc. In fact all you need to do is call EnableOData(…) on your configuration, passing in your model:

      // Create your configuration (in this case a selfhost one).
      HttpSelfHostConfiguration configuration = new HttpSelfHostConfiguration(_baseAddress);
      configuration.Formatters.Clear();

      // Enable OData
      configuration.EnableOData(GetEdmModel());
      // Create server
      server = new HttpSelfHostServer(configuration);
      // Start listening
      server.OpenAsync().Wait();
      Console.WriteLine("Listening on " + _baseAddress);

      As you can see this is pretty simple.

      For a description of how GetEdmModel() works checkout my earlier post.

      As you can imagine EnableOData(…) is doing quite a bit of magic, it:

      • Registers the ODataMediaTypeFormatter
      • Registers a wild card route, for matching all incoming OData requests
      • Registers OData routes for generating links in responses (these will probably disappear by RTM).
      • Registers custom Controller and Actions selectors that parse the incoming request path and dispatch all well understood OData requests by convention. The Action selector dispatches deeper OData requests (i.e. ~/People(1)/BestFriend/BestFriend) to a ‘catch all method’ called HandleUnmappedRequest(…) which you can override if you want.

      All of this is implemented in System.Web.Http.OData.Futures, which includes:

      • OData Route information.
      • OData specific Controller and Actions selectors – These classes help avoid routing conflicts. They are necessary because WebAPI’s built-in routing is not sophisticated enough to handle OData’s context sensitive routing needs.
      • ODataPathParser and ODataPathSegment – These classes help our custom selectors establish context and route to Controller actions based on conventions.
      • EntitySetController<T> – This class implements the conventions used by our custom selectors, and provides a convenient base class for your controllers when supporting OData.

      System.Web.Http.OData.Futures is currently only at sample quality, but it is basically required for creating OData services today, so we plan on merging it into System.Web.Http.OData for the RTM release.

      OData WebAPI Action Routing Conventions:

      The OData ActionSelector is designed to work best with the EntitySetController<TEntity,TKey> and it relies on a series of routing conventions to dispatch OData requests to Controller Actions. You don’t actually need to use the EntitySetController class, so long as you follow the conventions that the OData ActionSelector uses.

      The conventions currently defined in Futures and used by the sample are:

      image

      These conventions are not complete, in fact by RTM we expect to add a few more, in particular to handle:

      image

      We are also experimenting with the idea that anything that doesn’t match one of these conventions will get routed to:

      image

      As you can see we’ve made a lot of progress, and our OData support is getting more complete and compelling all the time. We still have a number of things to do before RTM, including bringing the ideas from System.Web.Http.OData.Futures into System.Web.Http.OData, finalizing conventions, enabling JSON light, working on performance and fixing bugs etc. That said I hope you’ll agree that things are taking shape nicely?

      As always we are keen to hear what you think, even more so if you've kicked the tires a little!


      •• The WCF Data Services Team described Using Add Service Reference with OData services after installing Windows Phone SDK 8.0 in a 10/31/2012 post:

      imageIn this post, we will talk about how you can continue to consume OData v3 services after installing Windows Phone SDK 8.0.

      imageIf you are a developer who is using WCF Data Services to produce OData v3 services in Visual Studio 2012 and you have installed the new Windows Phone SDK 8.0 but not the OData Client Tools for Windows Phone Apps, then you may experience a known issue.

      After installing Windows Phone SDK 8.0 on a system that has Visual Studio 2012, you will hit a known issue if you:

      • have Visual Studio 2012 installed AND
      • installed Windows Phone SDK 8.0 AND
      • did not install the OData Client Tools for Windows Phone Apps AND
      • want to add or update a service reference for an OData v3 service

      In this case, Add Service Reference will reference an older version of the data services references instead of the version that supports OData v3. For this reason, you will not be able to immediately consume a v3 service. This is because the Windows Phone SDK 8.0 overwrites some settings that guide Add Service Reference to pull in the right references.

      If you hit this issue, please re-run the latest WCF Data Services 5.0 for OData v3 installer. This will restore Add Service Reference for OData v3 services.

      We apologize for this inconvenience.


      •• The WCF Data Services Team explained How to update existing Windows Phone 7 OData applications to work with the new client tools in a 10/31/2012 post:

      imageOData Client Tools for Windows Phone Apps was recently released to add support for consuming OData v3 services. You may already have a Windows Phone 7.1 application that consumes OData and want to update it to work with OData v3 services using the new Windows Phone SDK 8.0 and the OData Client Tools for Windows Phone Apps.

      imageIn this short post we will describe how to update your existing Windows Phone 7 OData applications to take advantage of the new OData v3 support in client tools. This post assumes you already installed Windows Phone SDK 8.0 and OData Client Tools for Windows Phone Apps.

      When an existing Windows Phone 7.1 application project is opened in the Windows Phone SDK 8.0, it will be referencing System.Data.Services.Client.dll, which no longer exists at the original reference path.

      In this new release of the client tools, we re-architected our client assembly and split the one System.Data.Services.Client.dll assembly into four different assemblies. Therefore, these new assemblies need to be referenced to update the project. Fortunately, this is very trivial since we are using Nuget in this release.

      Going through the following steps will update your existing project to work with the new client tools:

      1. Remove assembly reference to System.Data.Services.Client.dll
      2. Install the OData Client for Windows Phone Apps Nuget package (Install-Package Microsoft.Data.Services.Client.WindowsPhone from the Package Manager Console). This should add the new references, and the project should compile.

      Now you are ready to enhance your existing application to work with OData v3 services.


      • The WCF Data Services Team reported OData Client Tools for Windows Phone Apps Now Available in a 10/30/2012 post:

      image_thumb8Windows Phone 8 and Windows Phone SDK 8.0 were just announced and we are pleased to announce that OData Client Tools for Windows Phone 8 is also now available from the download center.

      imageOData Client Tools for Windows Phone 8 brings OData V3 support to the Windows Phone platform. This release enables you to consume OData V3 services in your Windows Phone application. More specifically, this exciting release extends the Add Service Reference experience with client-side OData support for developing Windows Phone 7 and Windows Phone 8 applications using Windows Phone SDK 8.0. The tooling adds references that are capable of consuming OData services up to v3.

      How can I get the installer?

      You can download the installer for OData Client Tools for Windows Phone 8 from the download center. This release is not included with Windows Phone SDK 8.0 and needs to be installed separately. This release model will enable us to make release updates and fixes for the client tools and runtime components in a more efficient way.

      If you don’t install the client tools and start using Add Service Reference to consume an OData service you will be prompted with a message such as the below:

      You will need to navigate to the page and click on the OData Client Tools for Windows Phone 8 banner to download and install the client tools.

      What are the prerequisites?

      OData Client Tools for Windows Phone 8 requires Windows Phone SDK 8.0 to be installed.

      In addition, Nuget Package Manager v2.1 is required to enable Nuget support for Windows Phone 8 projects. Similar to our other platforms, we are embracing Nuget for the Windows Phone 8 platform as well. If you don’t already have the latest version of the Nuget Package Manager, please make sure to update either from within Visual Studio by navigating to Tools->Extensions and Updates or by downloading and installing from the Nuget Package Manager v2.1 download page.

      What is in this release?

      As mentioned at the beginning, this release brings OData V3 support to Windows Phone platform. You can now write Windows Phone 7.1 and Windows Phone 8 applications using Windows Phone SDK 8.0 and take advantage of exciting OData v3 features like:

      • Spatial: You can now take advantage of the spatial support in OData v3 to create location aware OData enabled phone apps
      • Actions: You can now invoke actions defined on an OData V3 service.
      • Any/All Queries: You can now express queries like “are there any customers which have no orders”
      • Vocabularies: You can now consume vocabularies for creating richer experiences on Windows Phone applications
      • Properties Defined on Subtypes: You can now have better inheritance support by consuming models which have properties (primitive, complex and navigation) defined on subtypes of the base type associated with the set.
      • ODataLib/EdmLib: You can now use these lower level libraries on Windows Phone.

      We are very excited with this release and we are looking forward to hearing your thoughts. Let us know what you think and feel free to point us to your OData enabled Windows Phone application.

      You can download the Windows Phone SDK 8.0, which runs under 64-bit Windows 8 only, here. From the Overview:

      The Windows Phone SDK 8.0 is a full-featured development environment to use for building apps and games for Windows Phone 8.0 and Windows Phone 7.5. The Windows Phone SDK provides a stand-alone Visual Studio Express 2012 edition for Windows Phone or works as an add-in to Visual Studio 2012 Professional, Premium or Ultimate editions. With the SDK, you can use your existing programming skills and code to build managed or native code apps. In addition, the SDK includes multiple emulators and additional tools for profiling and testing your Windows Phone app under real-world conditions.

      • Avkash Chauhan (@avkashchauhan) reported Windows Phone 8.0 Emulator requires a SLAT Supported Hardware Virtualization on Windows 8 Machine in a 10/30/2012 post:

      imageIf you have installed the Windows Phone SDK 8 on a Windows 8 x64 machine that does not have SLAT (Second Level Address Translation) support for hardware virtualization, you will not be able to run the Windows Phone Emulator 8.0 and will get the following message:

      This computer does not support hardware virtualization, which means Windows Phone Emulator 8.0 can’t run on this PC.

      If your machine is SLAT enabled, you can check the Hyper-V settings in "Windows Features" as below:

      Or, to verify whether your CPU supports SLAT, you can run the following utility on your Windows 8 machine:

      http://slatstatuscheck.codeplex.com/

      • Dhananjay Kumar (@debug_mode) detailed Getting Started with Windows Phone 8 in a 10/30/2012 post:

      The much-awaited Windows Phone 8 SDK launched on 30 October 2012, and as we all expected, there is HTML5 support in Windows Phone 8.

      image

      To get started, navigate here to download the Windows Phone 8 SDK, and then download Windows Phone SDK 8.0 from the New Downloads section. Two download options are available: Web Installer and DVD.

      image

      Download the Windows Phone 8 SDK from there. After downloading, start installing it.

      image

      After installing, you will be asked to restart the system. Click Restart Now to restart the system.

      image

      After successful installation of the Windows Phone 8 SDK, you will get a confirmation window like the following image. Click Launch to start Visual Studio after installation.

      image

      Now go ahead and create a Windows Phone Application project. You will notice that there is a template for a Windows Phone HTML5 App. However, let us go ahead and create an application using the Windows Phone App template.

      image
      Next you will be asked to choose the OS version. Choose Windows Phone OS 8.0 here.

      image

      Let us go ahead and put a TextBlock on MainPage.xaml.

      image

      The Windows Phone 8 SDK provides four emulator images for running and testing the application. Select Emulator WVGA 512MB and run the application in the emulator.

      image

      On running the application, you will see the all-new emulator. In a further post we will discuss more of the emulator's new features.

      image

      So now we are all set to explore more of Windows Phone 8. In further posts we will go into detail on each new feature and API.


      <Return to section navigation list>

      Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

      ‡ Richard Seroter (@rseroter, pictured below) posted Interview Series: Four Questions With … Jürgen Willis on 11/2/2012:

      imageGreetings and welcome to the 44th interview in my series of talks with leaders in the “connected technology” space. This month, I reached out to Jürgen Willis who is Group Program Manager for the Windows Azure team at Microsoft with responsibility for Windows Workflow Foundation and the new Workflow Manager (on-prem and in Windows Azure). Jürgen frequently contributes blog posts to the Workflow Team blog, and is well known in the community for his participation in the development of BizTalk Server 2004 and Windows Communication Foundation.

      image222I’ve known Jürgen for years and he’s someone that I really admire for his ability to explain technology to any audience. Let’s see how he puts up with my four questions.

      Q: Congrats on releasing the new Workflow Manager 1.0! It seems that after a quiet period, we’re back to having a wide range of Microsoft tools that can solve similar problems. Help me understand some of the cases when I’d use Windows Server AppFabric, and when I’d be better off pushing WF services to the Workflow Manager.

      A: Workflow Manager and AppFabric support somewhat different scenarios and have different design goals, much like WorkflowApplication and WorkflowServiceHost in .NET support different scenarios, while leveraging the same WF core.

      WorkflowServiceHost (WFSH) is focused on building workflows that consume WCF SOAP services and are addressable as WCF SOAP services. The scenario focus is on standalone Enterprise apps/workflows that use service-based composition and integration. AppFabric, in turn, focuses on adding management capabilities to IIS-hosted WFSH workflows.

      Workflow Manager 1.0 has as its key scenarios: multi-tenant ISVs and cloud scale (we are running the same technology as an Azure service behind Office 365). From a messaging standpoint, we focused on REST and Service Bus support since that aligns with both our SharePoint integration story, as well as the predominant messaging models in new cloud-based applications. We had to scope the capabilities in this release largely around the SharePoint scenarios, but we’ve already started planning the next set of capabilities/scenarios for Workflow Manager.

      If you’re using AppFabric and it’s meeting your needs, it makes sense to stick with that (and you should be sure to check out the new 4.5 investments we made in WFSH). If you have a longer project timeline and have scenarios that require the multi-tenant and scale-out characteristics of Workflow Manager, are Azure-focused, require workflow/activity definition management or will primarily use REST and/or Service Bus based messaging, then you may want to evaluate Workflow Manager.

      Q: It seems that today’s software is increasingly built using an aggregation of frameworks/technologies as developers aren’t simply trying to use one technology to do everything. That said, what do you think is the sweet spot for Workflow Foundation in enterprise apps or public web applications? When should I realistically introduce WF into my applications instead of simply coding the (stateful) logic?

      A: I would consider WF in my application if I had one or more of these requirements:

      • Authors of the process logic are not full-time developers. WF provides a great mechanism to provide application extensibility, which allows a broader set of people to extend/author process logic. We have many examples of ISVs who have used WF to provide extensibility to their applications. The rehostable WF designer, combined with custom activities specific to the organization/domain allow for a very tailored experience which provides great productivity to people who are domain experts, but perhaps not developers. We have increasingly seen Enterprises doing similar things, where a central team builds an application that allows various departments to customize their use of the application via the WF tools.
      • The process flow is long running. WF’s ability to automatically persist and reload workflow instances can remove the need to write a lot of tricky plumbing code for supporting long running process logic.
      • Coordination across multiple external systems/services is required. WF makes it easier to write this coordination logic, including async messaging handling, parallel execution, correlation to workflow instances, queued message support, and transactional coordination of inbound/outbound messages with process state.
      • Increased visibility to the process logic is desired. This can be viewed in a couple of ways. The graphical layout makes it much clearer what the process flow is – I’ve had many customers tell me about the value of a developer/implementer being able to review the workflow with the business owner to ensure that the requirements are being met. The second aspect of this is that the workflow tracking data provides pretty thorough data about what’s happening in the process. We have more we’d like to do in terms of surfacing this information via tools, but all the pieces are there for customers to build rich visualizations today.

      For those new to Workflow, we have a number of resources listed here.

      Q: You and I have spoken many times over the years about rules engines and the Microsoft products that love them. It seems that this is still a very fuzzy domain for Microsoft customers and I personally haven’t seen a mass demand for a more sophisticated rules engine from Microsoft. Is that really the case? Have you received a lot of requests for further investment in rules technology? If not, why do you think that is?

      A: We do get the question pretty regularly about further investments in rules engines, beyond our current BizTalk and WF rules engine technology. However, rules engines are the kind of investment that is immensely valuable to a minority of our overall audience; to date, the overall priorities from our customers have been higher in other areas. I do hope that the organization is able to make further investments in this area in the future; I believe there’s a lot of value that we could deliver.

      Q [stupid question]: Halloween is upon us, which means yet another round of trick-or-treating kids wearing tired outfits like princesses, pirates and superheroes. If a creative kid came to my door dressed as a beaver, historically-accurate King Henry VIII, or USB stick, I’d probably throw an extra Snickers in their bag. What Halloween costume(s) would really impress you?

      A: It would be pretty impressive to see some kids doing a Chinese dragon dance

      image_thumb9


      <Return to section navigation list>

      Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

      Chris Klug (@ZeroKoll) claimed Azure Web Sites and WebMatrix is pretty neat! in a 10/29/2012 post:

      imageI guess this post isn’t so much for us “professional” web developers as it is for the hobby developers. Having that said, I can probably think of about a hundred reasons why this would actually be good for me as a pro as well…

      imageMicrosoft currently has an Azure feature called Windows Azure Web Sites in preview. It is probably about to be released pretty soon, or at least I hope so. But I don’t want to get into Windows Azure Web Sites as such. What I want to have a quick chat about, is how extremely easy it is to get up and going with a simple website using Azure Web Sites and WebMatrix.

      WebMatrix as a tool is basically a small development environment, bundled with a bunch of things like IIS Express and so on. It also has a whole heap of predefined templates, making it really easy to get up and running with a new blog or whatever. It also supports node.js and php development, which is quite neat…

      But let’s start by looking at getting an Azure Web Site up and going…

      The first thing you need to do is to log into the Azure management portal, and create a new Web Site. This is done by clicking the “Web Sites” link on the left hand side, and then the big ass plus sign at the bottom right of the screen. This opens up a creation pane thingy, where you can either select “Quick Create” to create an empty site, or “Create With Database” to create a site connected to a new, or existing, database, or last but not least, you can click “From Gallery” to create a site based on a template.

      From gallery is a ridiculously simple way to get a whole application stood up for you within minutes. Among the templates found in the gallery, you have Drupal, BlogEngine.NET, DotNetNuke, phpBB and Composite C1 CMS, which is really awesome. It means that you can have a blog or CMS based application up and going within minutes…

      I will go for “Quick Create” as I will build a custom application using WebMatrix.

      All you need to do, is insert a unique Url, or rather a unique prefix to azurewebsites.net, select a region where you want to host it, and if you have more than one Azure account connected, select what account to use…and then of course click “Create Web Site”

      (You can connect a custom domain, but in that case you have to scale up your site beyond the free limit…)

      image

      And then, a minute later, it says that your web site has been created

      image

      Now that the site is up and running, browsing to it will give you a simple default web page

      image

      Ok…now that we have a web site up and running, it is time to do what I promised in the beginning, and that is to work with it using WebMatrix. This is ridiculously simple to do if you are using IE; if you are in Chrome, it is a bit more complicated as you need to install an extension. Luckily, I have both installed, so I just switch over to IE for this step…

      If you click on the newly created web site in the list of available web sites, you will get a dashboard showing some very empty statistics about your site. But you will also get a WebMatrix button at the bottom

      image

      Clicking that will open a “pop-up” that asks for your permission to do things. Just say yes. After that, WebMatrix will pop up for you.

      (This obviously requires you to have WebMatrix installed, which can be done through the Web Platform Installer)

      Once WebMatrix opens, it gives you a screen saying that an empty site was detected. This is pretty neat, ’cause it now offers you the ability to either pick an open source application from the Application Gallery, or pick a website from the Template Gallery. I don’t really know the difference between the two to be honest, but I don’t really care in this case. For this post, I am choosing “No, continue anyway”, which lets me start with a completely empty site.

      Next, WebMatrix configures a bunch of things for you automatically, before giving you control to do what you want with your site. But the nice thing is that everything is now properly configured for publishing and stuff.

      In my case, I start off by going to the “Settings” area and add a new default page called index.cshtml, which is what my single page will be called.

      Then I go to “Files” (bottom left), and add a new CSHTML file called index.cshtml (no surprise there…). But as you can see, it supports a multitude of different file types, including Classic ASP, which is quite funny…

      Adding a CSHTML page does not mean that you will be doing ASP.NET MVC. You will get the Razor view engine, but not the MVC part. It is like a mix of classic ASP with a Razor view engine, which means it supports layouts and .NET, which classic ASP didn’t; but looking at it, it is very similar in that you put all your code in the actual page…

      Before adding any markup, I am also deleting the hostingstart.html page, as it is not needed anymore.

      The HTML markup I am adding looks like this

      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta charset="utf-8" />
          <title>Is It Friday?</title>
      </head>
      <body>
          @if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
          {
              <h1>It's freaking FRIDAY today! Let the weekend begin!</h1>
          }
          else if (DateTime.Now.DayOfWeek == DayOfWeek.Thursday)
          {
              <h1>No...but it is Friday eve!</h1>
          }
          else
          {
              <h1>Nah...it's just @DateTime.Now.DayOfWeek.ToString()</h1>
          }
      </body>
      </html>

      It is just a simple “Is It Friday” app. You can now either browse to it locally, with the address available in the “Site” section. Or, you can just publish it to the cloud. Running it locally for debugging is of course a good idea, but I leave that up to you…

      Clicking “Publish” will do just that. It will find the files that have changed, and then upload those for you after confirming that you agree with which files have been modified.

      image

      And as you can see, it automatically includes a gazillion references for you, but you can ignore that for the most part. Just click “Continue”.

      You can follow the progress at the bottom of the screen, and after just a minute or two, you will get a message saying it is done. The message also includes a link to the site, so all you have to do is to click it to see the result.

      That’s all there is to it! Nice and easy! Don’t know if it really warranted its own blog post, but I found it so smooth and cool that I had to do something…

      image_thumb1


      <Return to section navigation list>

      Live Windows Azure Apps, APIs, Tools and Test Harnesses

      ‡ MarketWire reported Urban Airship Unveils Beta Push Messaging Service for Windows 8 and Windows Phone 8, Powered by Windows Azure in a 10/31/2012 press release:

      Urban Airship Signs Alliance Agreement With Microsoft

      imageUrban Airship, a leading provider of high-performance push messaging to mobile devices, today announced its beta push messaging support for Windows 8 and Windows Phone 8 devices with Windows Azure providing the backend cloud infrastructure. Urban Airship has also signed an alliance agreement with Microsoft Corp. to provide push messaging services on Windows Azure.

      Urban Airship's high-performance push messaging service enables businesses to send broadcast notifications as well as highly targeted messages to app users in order to drive engagement, increase retention and deliver exceptional experiences. Urban Airship's APIs, client libraries and documentation make it easy to add push messaging to apps developed for Android, BlackBerry, iOS and Windows Phone 8 and Windows 8. Developers can get started immediately by adding Urban Airship's new single codebase client library to their apps for Windows 8 and Windows Phone 8.

      image222"We are thrilled to work with Windows Azure to offer new experiences for consumers and enterprises across Windows 8 and Windows Phone 8 devices," said Scott Kveton, co-founder and CEO, Urban Airship. "This will offer Microsoft's customers the confidence of working with the global leader with proven scalability and precision-targeting that enterprises require."

      "Urban Airship helps large brands drive engagement through highly targeted push messaging to digital platforms, and we're excited to be working with them to bring their service to Windows 8," said John Richards, senior director of App Marketing for Microsoft Corp.

      About Urban Airship
      Urban Airship is the most globally deployed push messaging service, delivering billions of messages per month with unparalleled speed and scale for leading brands such as CBS Interactive, ESPN, Groupon, shopkick, Viddy, Walgreens and Warner Bros. Urban Airship enables these brands and 65,000 other customers to engage consumers directly on their mobile device home screens with precision-targeted mobile messaging. Complete mobile engagement suites offer easy and effective end-to-end management of the push messaging process from customer and location targeting, to automation and delivery, including message composition, rich landing page creation and analytics to optimize effectiveness. Its investors include Founder's Co-op, Foundry Group, Intel Capital, salesforce.com, True Ventures and Verizon. For more information, visit www.urbanairship.com and follow us on Twitter @urbanairship [link added].


      •• Nathan Totten (@ntotten) and Nick Harris (@cloudnick) posted CloudCover video Episode 93 - Real-World Windows Azure with Mural.ly on 10/31/2012:

      image222In this episode Nick and Nate are joined by Johnny Halife – co-founder and lead developer at Mural.ly – who talks about how he builds, deploys, and scales their application on Windows Azure to handle thousands of users and millions of requests per day. Johnny shows off real code and discusses how they use Windows Azure Web Sites and continuous integration to deploy into production hundreds of times per week. Finally, Johnny talks about his journey to Windows Azure and how it stacks up against the competition.

      Be sure to sign up for a free Mural.ly account!

      Follow @mural_ly
      Follow @johnnyhalife
      Follow @CloudCoverShow
      Follow @cloudnick
      Follow @ntotten


      •• Brian Harry reported Team Foundation Service RTM on 10/31/2012:

      Today, we announced that the Team Foundation Service has released. Read more about it on the service web site here: http://tfs.visualstudio.com/en-us/home/news/.

      I wanted to provide a bit of my own commentary (beyond what I put on the web site news)…

      Our original plan was to hold RTM until all the billing infrastructure was in place. Over the past several months, we’ve seen dramatic increases in people signing up. At the same time, I’ve seen two questions come up repeatedly:

      1. Will my data be preserved when the service RTMs? I’ve assured people that it will. This is a cloud service. It will continue to advance “forever”. However, it continues to be a source of anxiety.
      2. How much will the service cost? I’ve seen countless people play with the service and get to the point that they want to use it “for real” but they can’t with an open ended question on cost. We’ve made it clear that the data in the service is yours and we’ll ensure you have a way to get your data if the pricing is unappealing. However, it has continued to be an issue.

      Over the past couple of months, we’ve been asking ourselves what we could do to make these issues clearer and eliminate the anxiety for people. At the same time, we haven’t invented a time machine yet and we don’t actually have all of the billing infrastructure in place. So we worked to walk a fine line. So, this RTM announcement has the following key components:

      1. The service has RTMed. That means we are committed to this service for the long term; it is ready for production use; your data is safe and will be carried forward indefinitely. Our support organization is trained and ready to support customers with any issues.
      2. We will provide a free level of service up to 5 users in an account. This includes all of the features currently on the service (including “Premium” features like Agile Project Management). For now build has no cap, although we expect to introduce one at some point (probably a small # of build hours per month for free accounts). Because of this, the build service is still marked as “preview” to indicate that we have not settled on the precise usage allowance yet.
      3. We will provide the service to MSDN subscribers with Premium, Ultimate and TestPro subscriptions. Users with those subscriptions can participate in teams/projects of unlimited size and take advantage of all the features of the service. There will eventually also be a cap on build for this scenario as well – but presumably much larger than the free accounts. We can’t, of course, allow unlimited compute usage for a fixed cost under any level of subscription.
      4. We will provide a paid subscription option next year for teams that don’t fall into category 2 or 3. We have not disclosed any details about that paid subscription at this point – expect to hear more next year.
      5. There is no enforcement of any of these rules today. All usage is free but we’ve provided some information about what rules we intend to enforce and will introduce that enforcement with billing next year.

      The new url for Team Foundation Service is http://tfs.visualstudio.com. The old url, http://tfspreview.com, will also continue to work for some months but will eventually be retired.

      At the same time, we expect that the service, as it stands today, is not for everyone. There are many gaps between what is on the service and what you can do with our on-premises product. Some examples include:

      • You must use one of the built in process templates (Scrum, Agile, CMMI). You are not currently able to customize the process template.
      • The service does not yet support integration with our other enterprise services – Sharepoint, Project Server, System Center, etc.
      • You must login with a LiveID – we don’t yet support Active Directory federation.
      • Reporting is extremely limited. We provide some stock reports – like burn down and cumulative flow but the extensive reporting in the on-premises product is not yet supported on the service.

      image222These are a few examples of the kinds of benefits the on-premises solution has over the service today. Over time, all of these gaps will be addressed and Team Foundation Service will enable virtually everything you can do with our on-premises product. Until then, you will want to think carefully about your requirements before you choose between the cloud and on-prem solutions. For now, I think the cloud is a great solution for small to medium sized teams, particularly those that are distributed or cross organization. You’ll have to weigh the trade-offs yourself.

      We’ve been using the service for some of our own development projects for over a year now and have been extremely happy with it. I hope you’ll give it a try too.

      So, within our constraints, we’ve tried to remove as much ambiguity as we can – the service is RTM’d and your data will be safe; small teams get free access; teams with MSDN get it as part of their subscription. We’ve not provided additional pricing clarity for the medium sized teams but it’s no less clarity than we had before.

      I’m very happy to be taking this next step in the journey and looking forward to enabling even more people to collaborate easily through the cloud.


      • Bruno Terkaly (@brunoterkaly) continued his series with Step 2–Augmented Reality, Windows 8, and Cloud Computing–How to implement with real code on 10/30/2012:

      imageThe purpose of this post is to create the starting project for the web service backend.

      Later, we will create the augmented reality Windows 8 client.

      001

      1. image222Let's start with the cloud back-end.
      2. The mobile application will send the Azure Cloud Application GPS coordinates.
      3. The Cloud Application will use the coordinates to do a lookup at Google.com or Bing.com to find more information about those coordinates.
        • For example, it can lookup the city and neighborhood.
      4. The Cloud application will return this data back to the client for display on the screen, overlaying the photo.

      Notice that we are selecting an MVC Web API application.
      002

      image

      1. It is important to understand WHY I am choosing an MVC Web API project type.
      2. There are basically two options to build the web services project:
        • Use Windows Communication Foundation - WCF -, or
        • Use ASP.NET Web API, which is included with MVC version 4.
      3. Exposing services via WCF is also easy to do.
      4. But for this specific scenario we will use the newer, more modern approach that ASP.NET Web API brings to the table, truly embracing HTTP concepts (URIs and verbs).
      5. The MVC Framework allows us to create services that more easily use some HTTP features, such as request/response headers, along with hypermedia constructs.
      6. Both projects can be tested on a single machine during development.
      7. Watch the following 1.5 minute video to see how to create a cloud-based MVC4 application.

      Understanding Role Types for a Windows Azure Project - What the options mean
      003
      1. ASP.NET Web Role
        • This is old school web forms.
        • This is a very well-established pattern for creating modern web applications.
        • There is a well established eco-system of tools, documentation and resources.
        • But it doesn't lend itself to unit testing.
        • It doesn't have a clear separation of code-behind and markup.
        • Many developers have moved on to ASP.NET MVC, described next.
      2. ASP.NET MVC3/4 Web Role (what we choose in these posts)
        • This is the popular and powerful framework from Microsoft.
        • It is a web application framework that implements the model-view-controller (MVC) pattern.
        • It also provides the ability through the Web API to create REST-based web services. We will choose this option.
        • Can do unit testing and has clear separation of concerns.
      3. WCF Service Web Role
        • The Windows Communication Foundation (or WCF) is a runtime and a set of APIs in the .NET Framework for building connected, service-oriented applications.
        • It has been used heavily for SOA-type applications, but has given ground to the Web API as REST-based architectures grew in popularity.
        • This is a perfectly acceptable solution, and the only solution, if you need to support many advanced Web services (WS) standards such as WS-Addressing, WS-ReliableMessaging and WS-Security.
        • In the .NET Framework 4.0 and later, WCF also provides RSS Syndication Services, WS-Discovery, routing and better support for REST services.
      4. Worker Role
        • So far we have been talking about web roles. Web roles, by default, include Internet Information Server (IIS) inside of a Windows Server OS inside a VM, running on one physical core.
        • Worker roles are the same thing as web roles, except there is no IIS.
        • This allows cloud-based applications to run background processes, typically reading messages placed in queues by web roles (a minimal sketch of such a loop follows this list).
        • Worker roles can leverage Windows Azure Storage, just like web roles.
      5. Cache Worker Role
        • Windows Azure Caching supports the ability to host Caching services on Windows Azure roles.
        • In this model, the cache can join memory resources to form a cache cluster.
        • This private cache cluster is available only to the roles within the same deployment.
        • Your application is the only consumer of the cache.
        • There are no predefined quotas or throttling.
        • Physical capacity (memory and other physical resources) is the only limiting factor.
        • Other features include named caches, regions, tagging, high availability, local cache with notifications, and greater API symmetry with Microsoft AppFabric 1.1 for Windows Server.
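
      Worker roles usually amount to little more than a Run loop that drains a queue populated by a web role. Purely as an illustration (this is not part of Bruno's walkthrough), here is a minimal sketch of such a loop using the Windows Azure storage client library from the SDK 1.7 era; the "StorageConnectionString" setting name and "work-items" queue name are assumptions.

      // A minimal sketch (not from the original post) of a worker role draining a queue
      // that a web role writes to. Setting and queue names are assumptions.
      using System;
      using System.Threading;
      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.ServiceRuntime;
      using Microsoft.WindowsAzure.StorageClient;

      public class WorkerRole : RoleEntryPoint
      {
          public override void Run()
          {
              var account = CloudStorageAccount.Parse(
                  RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString")); // assumed setting name
              var queue = account.CreateCloudQueueClient().GetQueueReference("work-items"); // assumed queue name
              queue.CreateIfNotExist();

              while (true)
              {
                  CloudQueueMessage message = queue.GetMessage();
                  if (message != null)
                  {
                      // Process the message placed here by a web role, then remove it.
                      queue.DeleteMessage(message);
                  }
                  else
                  {
                      Thread.Sleep(TimeSpan.FromSeconds(5)); // back off when the queue is empty
                  }
              }
          }
      }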

      Understanding ValuesController is a key starting point
      004
      1. ValuesController.cs is a file we need to modify.
      2. It contains the code that will execute when the Windows 8 client submits an HTTP request against the web service.
        • This HTTP request will include GPS data.
      3. This is where we will add some of our code to return the JSON data required by the Windows 8 application.
      4. The ValuesController class is generated by Visual Studio, and it inherits from ApiController, which returns data that is serialized and sent to the client, automatically in JSON format.
      5. We will test this file before modifying it. We need to learn how to call methods inside of ValuesController.cs (a sketch of how the modified controller might look follows the generated code below).

      public class ValuesController : ApiController
      {
          // GET api/values
          public IEnumerable<string> Get()
          {
              return new string[] { "value1", "value2" };
          }

          // GET api/values/5
          public string Get(int id)
          {
              return "value";
          }

          // POST api/values
          public void Post(string value)
          {
          }

          // PUT api/values/5
          public void Put(int id, string value)
          {
          }

          // DELETE api/values/5
          public void Delete(int id)
          {
          }
      }

      image
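
      To make the goal of the series concrete, here is a rough, hypothetical sketch of the shape this controller could take once it accepts GPS coordinates. The LocationInfo type and the stubbed lookup are placeholders for the real reverse-geocoding call that Step 3 will add; Web API binds the latitude and longitude parameters from the query string and serializes the return value for the client.

      // A hypothetical shape for the controller after it is modified to accept GPS data.
      // LocationInfo and the lookup are placeholders; Step 3 will supply the real lookup.
      using System.Web.Http;

      public class LocationInfo
      {
          public string Street { get; set; }
          public string City { get; set; }
          public string Neighborhood { get; set; }
      }

      public class ValuesController : ApiController
      {
          // GET api/values?latitude=37.78&longitude=-122.39
          public LocationInfo Get(double latitude, double longitude)
          {
              // In Step 3 this will call a mapping service (Google or Bing) to reverse-geocode
              // the coordinates; for now return a stub so the response shape can be tested.
              return new LocationInfo
              {
                  Street = "unknown",
                  City = "unknown",
                  Neighborhood = "unknown"
              };
          }
      }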

      Note the video which shows us exactly how to call into a web service method
      006

      1. Let us now use a browser to test our web service.
      2. Any browser can be used for this purpose.
      3. To test the Web Service from a browser, perform the following steps:
      4. In Visual Studio, click on the Debug menu and choose Start Debugging.
      5. This will start the various emulators to allow us to run our application on our local dev machine.
        • There is a Compute Emulator
          • Runs our MVC WebAPI app
        • There is a Storage Emulator
          • We are not using storage today for this app
      6. You should see the screen above.
          • Notice the web address of http://127.0.0.1:81
          • That is port 81 on my machine (yours may differ)
      7. We can call the get() method by simply issuing the following url
        • http://127.0.0.1:81/api/values
          • This will trigger the ASP.NET Web API application to send the two strings:
            • value1 and value2
      8. Watch the video that illustrates this application running and returning value1 and value2
      9. We just ran a simple intro sample before we do the real work.
      10. Now you should be starting to see that we can call into the cloud application quite easily. We just need to use a URL from a Windows 8 application (a client-side sketch follows this list).
        • We can pass parameters and receive data back.
          • The data we pass will be GPS coordinates.
          • The data we get back will be location information, such as neighborhood, city, etc.
      11. Conclusion
        1. We have successfully tested an MVC Web API based cloud project.
        2. The next step is to enhance it to call another web service to get location information based on GPS coordinates
          1. This is Step 3
        3. Question
          1. Should I keep doing quick videos? Are they helpful?
          2. No voice yet. Pretty self-explanatory what I'm doing.
          3. Comments Welcome.
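
      As a companion to the browser test above, here is a minimal, hypothetical sketch of how a Windows 8 application could call the same endpoint with HttpClient; the port, route and parameter names are assumptions based on the local test URL.

      // A minimal sketch (not from the post) of calling the Web API from a Windows 8 app.
      // The port, route and parameter names are assumptions based on the local test URL.
      using System.Net.Http;
      using System.Threading.Tasks;

      public static class LocationClient
      {
          public static async Task<string> GetLocationJsonAsync(double latitude, double longitude)
          {
              using (var client = new HttpClient())
              {
                  string url = string.Format(
                      "http://127.0.0.1:81/api/values?latitude={0}&longitude={1}",
                      latitude, longitude);

                  // The Web API serializes the response automatically; the caller can parse
                  // the JSON and overlay the result on the camera view.
                  return await client.GetStringAsync(url);
              }
          }
      }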

      BusinessWire reported EVault Signs Alliance Agreement with Microsoft to Offer EVault Cloud-Connected Backup and Recovery Services on Windows Azure in a 10/30/2012 press release:

      imageEVault, Inc., a Seagate Company (NASDAQ:STX), today announced the EVault® backup and recovery platform will now take advantage of the Windows Azure cloud platform. In conjunction with this initiative, EVault will be expanding development, sales and marketing collaboration around Windows Azure. EVault Endpoint Protection (EVault EP), a cloud-connected backup, recovery and data security solution for endpoint devices, is available on Windows Azure. EVault EP is targeted at enterprises, as well as midmarket businesses.

      EVault Endpoint Protection automates laptop backup recovery and provides secure capabilities — including local file encryption, port access control and, if a laptop is lost or stolen, remote data deletion and device tracing — to help control valuable data spread across mobile workforces. EVault EP enables organizations to secure data on every PC and laptop, ensuring critical corporate information is safe when a device disappears. The endpoint data is automatically backed up to the cloud and can easily be recovered by users themselves. EVault EP offers policy-based, centralized administration and self-service management to help IT attain that perfect balance of corporate control and end user independence.

      “Businesses are looking for flexible, cost-effective data backup and recovery solutions. Our expanded relationship with Microsoft represents the increased demand for more integrated and simple to use cloud-connected storage services that satisfy an organization's unique requirements,” said Terry Cunningham, president and general manager, EVault. “As we extend our footprint around the world, we see great value in taking advantage of Windows Azure to help our customers maintain business continuity and ensure their data needs are met anytime and anywhere.”

      “Windows Azure allows solution providers like EVault to give their customers virtually unlimited capacity and flexibility for backup and recovery to help customers mitigate a wide range of information risks,” said Kim Akers, general manager, Developer and Platform Evangelism, Microsoft.

      Windows Azure is Microsoft’s cloud platform, helping developers build the next generation of applications that will span from the cloud to the enterprise data center. The platform combines cloud-based developer capabilities with storage, computational and networking infrastructure services, all hosted on Microsoft’s servers.

      About EVault

      More than 38,000 midmarket companies rely on EVault cloud-connected backup and recovery services. Delivered by a team of data recovery experts and using the very best cloud-connected technology, EVault backup solutions seamlessly integrate on-premise and online backup data protection for fast, local data access and ensured cloud disaster recovery. Optimized for distributed environments and backed by an ironclad cloud, EVault technology also powers the offerings of cloud services providers, data centers, telcos, ISVs, and many others. EVault is a Seagate Company.

      Copyright 2012 EVault, Inc. All rights reserved. Seagate, Seagate Technology and the Wave logo are registered trademarks of Seagate Technology LLC in the United States and/or other countries. EVault, EVault Endpoint Protection and Cloud-Connected are either registered trademarks or trademarks of EVault, Inc. in the United States and/or other countries.


      Bruno Terkaly (@brunoterkaly) posted Step 1–Augmented Reality, Windows 8, and Cloud Computing–How to implement with real code on 10/29/2012:

      Welcome to Augmented Reality - The Future is Now
      image

      1. imageAugmented reality is a very promising technology. In some ways it is better than virtual reality. Augmented reality has the advantage of including the real-life image into the experience. Virtual reality is 100% simulated and therefore has different applications.
        • Nearly everyone enjoys an improved perception of reality at any given moment (unless you are living in denial, of course).
        • For example, imagine overlaying the location information as you take a photo. You can choose to overlay information about that specific location directly on top of the image you are viewing. That is what we are building as these posts unfold.
      2. Developers can take advantage of this capability today and create apps that are compelling and useful. There is a market opportunity here.
      3. Augmented Reality applications offer great value to a variety of users.
      4. I will create a series of posts. These posts will:
        • Discuss what types of work you should do on the device itself and in the cloud
        • Implement a simple proof of concept on how you might implement such a system.
        • Teach you about the role of cloud computing for augmented reality.
          • As an example, the cloud plays a crucial role processing sensor information generated by the device (phone, tablet, etc).
          • I will show you how a cloud-based application can receive GPS / location information from a mobile device and process that location information by doing some lookups on other web services or websites.
          • This approach makes sense because the cloud can perform the needed tasks at high scale and speed.
      5. We will focus here on Windows 8 devices. Other form factors include:

      Been there, done that!
      002
      1. Many of us are already enjoying Augmented Reality today
        • Figure 1 depicts the first down line in professional football
      2. In virtual reality, time, physical laws and material properties may no longer be thought of as true, in contrast to the real-world environment.
      3. There are many applications for augmented reality
        • Tourism and sightseeing
          • Add historic event to view
        • Architecture
          • Simulate planned construction projects
        • Military
          • In combat, AR provides useful information to soldiers on where they are and how many enemies are surrounding them, and it can spot an enemy that the soldier might not be looking at. AR can be a third eye for the soldier, indicating when there is someone at his back.
        • Use your imagination – the possibilities are quite significant (medical, manufacturing, photography, teaching, exploring)

      Example Scenario that I will code up
      image

      Step 1

      Tablet has a sensor. The sensor in this case is GPS. It sends the GPS coordinates to a cloud application running in Windows Azure. This is done with a simple web service call from a Windows 8 application.

      Step 2

      The Windows Azure cloud application extracts the GPS coordinates sent by the Windows 8 application.

      Step 3

      The Windows Azure cloud application makes a call into any number of web services. I will demonstrate a call into Google maps. I could have easily called into Bing's mapping API. My point is to show interoperability with the rest of the open web. (A sketch of such a call appears after Step 7 below.)

      Step 4

      The Google mapping API or Bing mapping API then looks up information about the GPS coordinates, such as street, city, neighborhood, etc.

      Step 5

      The Windows Azure cloud application then packages up the results to send back to the Windows 8 client application.

      Step 6

      The Windows 8 application receives the coordinates and displays them.

      Step 7

      The street, city, neighborhood information gets overlaid on top of the live camera view, thus fulfilling the promise of augmented reality.
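
      As an illustration of Steps 3 and 4 (a sketch, not Bruno's code), the cloud application could reverse-geocode the coordinates with a single HTTP call. The Google Maps geocoding endpoint shown is the public one available at the time of writing; Bing's Locations API could be substituted.

      // A minimal sketch of Steps 3 and 4: the Azure application reverse-geocodes the
      // coordinates it received from the Windows 8 client.
      using System.Net.Http;
      using System.Threading.Tasks;

      public static class ReverseGeocoder
      {
          public static async Task<string> LookupAsync(double latitude, double longitude)
          {
              string url = string.Format(
                  "http://maps.googleapis.com/maps/api/geocode/json?latlng={0},{1}&sensor=true",
                  latitude, longitude);

              using (var client = new HttpClient())
              {
                  // The JSON response contains formatted_address, locality, neighborhood, etc.,
                  // which the cloud application repackages and returns to the client (Steps 5 and 6).
                  return await client.GetStringAsync(url);
              }
          }
      }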


      It's all about the back-end
      004
      1. The cloud back-end supporting augmented reality applications is essential.
      2. It can provide a variety of resources not available to the client application front-end.
        • There may be a big data store to support augmented reality applications. Azure has an excellent storage / scaling offering.
        • Data may often originate from a relational database. SQL Database is an offering that lowers database costs and simplifies provisioning and deployment of databases.
        • Storage may be in various forms, such as video, text, graphics. Azure supports tables, blobs, queues for such purposes.
        • Your application may be global in scope, requiring you to manage traffic on a global basis, providing the least latency and the best coverage
        • Caching may be needed to minimize latency for video and images, even application level programming objects
        • A messaging infrastructure may be needed to allow asynchronous communication between client application and cloud. Simple Storage Queues are available and for more sophisticated examples you can use Service Bus Queues.
        • Identity management might be required to authenticate users from Active Directory, Google, Facebook, Yahoo and Live.
        • Media services is useful for uploading video and adding watermarks and custom encoding of content
        • CDNs make it convenient to support users who need minimized latency while accessing graphics and videos. CDNs also support streaming video content that could be overlaid on top of a camera view.
        • Networking supports the ability to host Virtual Private Networks (VPN) supporting tight integration between the cloud and on-premises resources.
      3. The next post will begin by creating a cloud application to support augmented reality application using Windows 8.

      Jay Connor (@martindotnet) described Agile path ways into the Azure Universe – Configuring the Azure Emulator to work as part of our specification fixtures in a 10/29/2012 post:

      Reader Notes

      This article is pitched at a highly technical audience who are already working with Azure, StoryQ and potentially Selenium / Web Driver. It primarily builds on a previous article we wrote in this series, in which we explain all the frameworks listed above. If they are unfamiliar to you we would suggest reading through this article first:

      Agile path ways into the Azure Universe – Access Control Service [ACS] [http://blog.elastacloud.com/2012/09/23/agile-path-ways-into-the-azure-universe-access-control-service-acs/]

      For those who are completely new to test-driven concepts, we might also suggest reading through the following article as an overview of some of the concepts presented in this series.

      Step By Step Guides -Getting started with White Box Behaviour Driven Development [http://blog.elastacloud.com/2012/08/21/step-by-step-guides-getting-started-with-specification-driven-development-sdd/]

      Introduction

      In this article we will focus on building a base class which will allow the consumer to produce atomic, repeatable Microsoft Azure-based tests which can be run on the local machine.

      The proposition is that, given a correctly installed machine with the right set of user permissions and configuration, we can check out fresh source from our source control repository and execute a set of tests to ensure the source is healthy. We can then add to these tests and drive out further functionality safely in our local environment prior to publication to Microsoft Azure.

      The one caveat to this proposition is that, due to the nature of the Microsoft Azure cloud-based services, there is only so much we can do in our local environment before we need to provide our tests with a connection to Azure assets (such as ACS [Access Control Service] and Service Bus). It should be noted that various design patterns outside the scope of this article can substitute for some of these elements and provide some fidelity with the live environment. The decision on which route to take on these issues is project specific and will be covered in further articles in coming months.

      Tools

      Our own development setup is as follows:

      We find that the above combination of software packages makes for an exceptional development environment. Windows 8 is by far the most productive operating system we have used across any hardware stack. JetBrains ReSharper has become an indispensable tool, without which Visual Studio feels highly limited. NUnit is our preferred testing framework; however, you could use MbUnit or xUnit. For those who must stick with a pure Microsoft ALM experience you could also use MSTest.

      Azure Emulator

      The Microsoft Azure toolset includes the Azure Emulator; this tool attempts to offer a semi-faithful local development experience of a deployed application scenario on Windows Azure. This is achieved by the emulation of Storage and Compute on the local system. Unfortunately, due to the connected nature of the Azure platform, in particular the Service Bus element, the Emulator's ability is somewhat limited. In the test-driven world a number of these limitations can be worked around by running in a semi-connected mode (where your tests still have a dependency on Microsoft Azure and a requirement to be connected to the internet) for the essentials that cannot be emulated locally.

      With forward thinking, good design and mocking/faking frameworks it is possible to simulate the behaviour of the Microsoft Azure connected elements. In this scenario every decision is a compromise and there is no right or wrong answer, just the right answer at that time for that project and that team.

      Even with the above limitations the Emulator is a powerful development tool. It can work with either a local install of IIS or IIS Express. In the following example we will pair the emulator with IIS Express. We firmly believe in reducing the number of statically configured elements a developer has to have to run a fresh checkout from source control of a given code base.

      Task 1 – Configure the emulator arguments in the app.config file

      The first task is to set up some configuration entries to allow the framework to run the emulator in a process; these arguments define:

      • Path to the Azure Emulator
      • Path to the application package directory
      • Path to the application service configuration file

      The first step is to add an app.config file to our test project

      SNAGHTML2314b523

      Note – a relative root can be configured for the CSX and the Service Configuration file; in this example, to keep things explicit, we have not done this.

      [csrun]

        The first argument we need to configure is the path to the emulator; on our machine using SDK 1.7 this is configured as follows:

      image

      [csx]

      The second argument we need to configure is the path to the package, for our solution this looks like:

      image

      [ServiceConfiguration.cscfg]

      The third argument we need to set up is the path to the services configuration file. For our solution this looks like this:

      image

      [Use IISExpress switch]

      The final argument we need is to tell the emulator to use IISExpress as its web server:

      image

      The final Configuration group:

      image

      Task 2 – We build the process command line

      Now that we have the argument information captured in the application configuration file, we need to build this information into an argument string. We have done this in our Test / Specification base:

      image

      In the TestFixtureSetup [called by child test classes]

      • Declare the arguments as variables
      • We assign the values from the applications configuration file to our new variables
      • We then build our argument string and assign it to _computeArgs
      Task 3 – We set up the emulator and execute it in a process

      Now we have all the information we need to pass to our process to execute the emulator; our next stage is to start a process and host the emulator using the arguments we have just defined.

      image

      The code is relatively trivial

      • Spin up a process inside a using block
      • Pass in the emulator arguments
      • Wait for the process to finish
      • Report on the output
      • The Should().Be() is making use of Fluent Assertions http://fluentassertions.codeplex.com/
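
      Pulling Tasks 1 to 3 together (the post shows the code only as screenshots), here is a minimal sketch of reading the paths from app.config, building the csrun argument string and running the emulator in a process. The appSettings key names are assumptions; the argument layout is the standard package + configuration + /useiisexpress form of the csrun command line.

      // A minimal sketch of the specification base class described in Tasks 1-3.
      // The appSettings key names are assumptions; adjust them to match your app.config.
      using System.Configuration;   // requires a reference to System.Configuration
      using System.Diagnostics;

      public abstract class EmulatorSpecificationBase
      {
          private string _csRunPath;
          private string _computeArgs;

          // Called from the child fixture's TestFixtureSetUp
          protected void StartEmulator()
          {
              _csRunPath = ConfigurationManager.AppSettings["CsRunPath"];          // e.g. ...\Windows Azure\Emulator\csrun.exe
              var csxPath = ConfigurationManager.AppSettings["CsxPath"];           // path to the application package (csx) directory
              var configPath = ConfigurationManager.AppSettings["ServiceConfig"];  // path to ServiceConfiguration.cscfg

              // csrun <package> <configuration> /useiisexpress
              _computeArgs = string.Format("\"{0}\" \"{1}\" /useiisexpress", csxPath, configPath);

              using (var process = Process.Start(new ProcessStartInfo(_csRunPath, _computeArgs)
              {
                  UseShellExecute = false,
                  RedirectStandardOutput = true
              }))
              {
                  string output = process.StandardOutput.ReadToEnd();  // report on the output
                  process.WaitForExit();
              }
          }
      }
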
      Task 4 – Add code to our roles to make them publish an Azure package when they build

        Since Azure SDK 1.4, role projects have not automatically created package contents for Azure. We need this to happen, so we use a small code fragment which we add to the Azure project file.

        image

        Reference Posts

        http://andrewmatthewthompson.blogspot.co.uk/2011/12/deploying-packages-to-azure-compute.html

        Task 5 – We now execute our tests

        Now with the emulator + iisexpress running we are free to execute our tests / specifications from our child test / specifications fixtures.

        Step 1 the Emulator starts

        SNAGHTML275df17e

        Step 2 IISExpress hosts the site

        image

        Step 3 [Optional] Selenium + web driver open browser and run tests

        SNAGHTML275f644c

        image

        Task 6 – Shutting down the emulator

        The emulator and the service can be quite slow in ending; it is possible to use the arguments in the following article to remove the package and close the emulator down. However, we have encountered issues with this, so instead we prefer to kill the process. We suggest you find out which of these approaches works best for your situation. Reference article: http://msdn.microsoft.com/en-us/library/windowsazure/gg433001.aspx Our code

      image

        image
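
        A minimal sketch of the kill-the-process approach described above; the process names are assumptions about the SDK 1.7 emulator and IIS Express and may need adjusting for other SDK versions.

        // A minimal teardown sketch: kill the emulator-related processes rather than
        // asking csrun to remove the deployment. Process names are assumptions.
        using System.Diagnostics;

        public static class EmulatorTeardown
        {
            public static void KillEmulatorProcesses()
            {
                foreach (var name in new[] { "DFService", "csrun", "iisexpress" })
                {
                    foreach (var process in Process.GetProcessesByName(name))
                    {
                        process.Kill();
                        process.WaitForExit();
                    }
                }
            }
        }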

        Points of note

        You must configure Azure to use IIS Express at a project level. The emulator does not like being shut down manually halfway through a test run; if you have to do this, make sure you end the service process also. This is an advanced technique with a number of moving parts, and as such it is advised that you practise it on a piece of spike code before using it on a production project.
        This technique is limited by the constraints of the SDK and the Emulator.

        Recap –What have we just done
        • We have configured the information required by the process and the emulator in an application configuration file
        • We have run up a process which starts the emulator + iisexpress
        • We have configured publish for emulator in the azure config
        • We have run our tests / specifications against the azure emulator
        • We have shutdown both iisexpress and the emulator
        • We are now green and complete.
        Conclusion

        This has been quite an advanced topic; defining these steps has been a journey of research and experimentation that we hope to save the reader.

        The purpose of this technique is to uphold the principle that a developer should be able to simply check out source on a standard development-configured machine and the tests should just run. They should be atomic, include their own setup and teardown logic, and when they are finished they should leave no footprint. There is no reason we should break this principle just because we are coding against the cloud.

        It is also important that tests are repeatable and have no side effects. Again, the use of techniques such as those demonstrated in this short article helps to uphold this principle.


        Business Wire reported Numerix Extends Its Windows Azure Solutions to Meet Financial Services Industry Needs for Cross-asset Valuation and Risk Management in a 10/28/2012 press release:

        imageNumerix (www.numerix.com), a leading provider of cross-asset analytics for derivatives valuations and risk management and a Microsoft Global Alliance Partner, today announced plans to enable additional Numerix solutions for use with Microsoft’s cloud platform, Windows Azure.

        image222According to Steven R. O’Hanlon, President and COO of Numerix, “Evolving derivatives regulations and unpredictable market conditions have fueled demand for more frequent, granular and portfolio-wide valuations and risk analytics. In order to process these calculations for thousands of financial instruments in near real-time, access to on-demand, scalable compute power has become a must-have in today’s financial services and insurance firms.”

        Numerix currently offers its CrossAsset XL and Portfolio solutions powered by Windows Azure and Windows HPC Server. These solutions enable users to burst Excel calculations to the cloud and distribute them across a grid or a combination of both private and public cloud compute environments. This allows business agility, reduces capital expenditure, and quickly scales to meet the most compute intensive pricing and calculation demands. Numerix will build on this proven success and extend its use of Windows Azure to include Numerix Leading Hedge, an Asset and Liability Management solution for Variable Annuities and Equity Index Annuities, as well as forthcoming new releases of Numerix CrossAsset.

        “Our alliance with Microsoft underscores our commitment to meet customer demand with Windows Azure-enabled solutions,” O’Hanlon continued. “Our ability to burst pricing and risk calculations to Windows Azure is revolutionizing the ways firms can leverage the cloud to manage risk across their diverse portfolios. Looking ahead, cloud enablement will be integral to our new product development efforts.”

        “Industry partners like Numerix are essential to widespread adoption of Azure, as their solutions exemplify how traders, risk managers and life insurers can securely perform rapid and consistent risk calculations in the cloud,” said Kim Akers, General Manager, Developer and Platform Evangelism, Microsoft. “We are pleased with Numerix’s commitment to enable leading financial services and insurance firms to confidently tap the power of private, public and hybrid cloud computing.”

        “Providing business agility and lowering costs are essential in today’s financial industry,” said Karen Cone, General Manager, Worldwide Financial Services at Microsoft Corporation. “Numerix is a leader in providing analytics and risk management to the market, and we are excited that they are continuing their investment in Microsoft technology.”

        Earlier this year Microsoft recognized Numerix as ISV/Software Solutions Industry Partner of the Year for its success developing and implementing Microsoft-enabled software solutions for its customers.

        About Numerix

        Numerix is the award winning, leading independent analytics institution providing cross-asset solutions for structuring, pre-trade price discovery, trade capture, valuation and portfolio management of derivatives and structured products. Since its inception in 1996, over 700 clients and 75 partners across more than 25 countries have come to rely on Numerix analytics for speed and accuracy in valuing and managing the most sophisticated financial instruments. With offices in New York, London, Paris, Frankfurt, Milan, Stockholm, Tokyo, Hong Kong, Singapore, Dubai, South Korea, India and Australia, Numerix brings together unparalleled expertise across all asset classes and engineering disciplines. For more information please visit www.numerix.com.

        image_thumb22


        <Return to section navigation list>

        Visual Studio LightSwitch and Entity Framework 4.1+

        ‡‡ Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–October 2012 on 11/1/2012:

        imageLast year I started posting a rollup of interesting community happenings, content, samples and extensions popping up around Visual Studio LightSwitch. If you missed those rollups you can check them all out here: LightSwitch Community & Content Rollups.

        imageOctober was relatively quiet on the LightSwitch front personally for me. I went on a week long vacation to Cabo San Lucas for my birthday and I turned all electronic devices off! That’s what I call a real vacation :-) I did speak at a couple conferences though and we have a lot of events coming up, check it out.

        Events & Conferences

        DSC03187I presented at a couple conferences in the beginning of the month, one all the way in Sofia Bulgaria for DevReach and then again at the Silicon Valley Code Camp. I presented two LightSwitch sessions at each of these, one demonstrating the HTML Client Preview for building business apps for Mobile devices and another on building OData services and deploying them to Azure. I have to say, I think people are very excited about the HTML client! If you missed my trip report you can read it here as well as download the slides I presented:

        Trip Report–DevReach Bulgaria & Silicon Valley Code Camp

        5b852d77-6700-43a6-80de-ee23832cf99a[1]John Stallo (Lead PM for the LightSwitch team) also has a session at //BUILD where he’ll show off some more goodies with the LightSwitch HTML client. The recording will be available afterwards so be sure to check it out. If you haven’t been paying attention this week there have already been some awesome sessions & keynotes on how to build beautiful apps for Windows 8 and Windows 8 Phone. Just head to: http://www.buildwindows.com/ to watch live.

        Lots of LightSwitch Events Coming Up

        We’ve got a lot more events coming up where me and the team will be speaking on LightSwitch.

        As part of Richard Campbell & Carl Franklin’s .NET Rocks Road Trip, I’ll be the guest star in San Diego on November 27th. Carl & Richard rented a big 37' RV and have been travelling all over the US & Canada for the launch of Visual Studio 2012. At each stop they record a live .NET Rocks! show with a guest star. Following that, they each do a presentation around building modern applications on the Windows platform.

        Visual Studio 2012 Launch Road Trip
        Nov 27th @ 6pm – 16620 West Bernardo Drive, San Diego, CA (REGISTER HERE)

        I’ll also be speaking in San Diego next week at the .NET User Group there. Then I’ll be touring eastern Canada in December (don’t ask me how a little Italian is going to survive the cold Canadian weather). Hope to see you at one of these events!

        Many thanks to Jean Rene-Roy for setting up the Canadian tour and the full day workshop before DevTeach this year!

        More Notable Content this Month

        Extensions released this month (see over 100 of them here!):

        • Signature Controls for LightSwitch (Jason Williams, Centrolutions)- LightSwitch Control Extension that allows the user to use a mouse, touch (finger), or stylus to sign their name or draw a picture. Great for membership databases, document databases, and pretty much any other application that requires a signature to be produced.

        Samples (see all 91 of them here):

        Team Articles:

        Community Articles:

        Huge shout out again to Paul van Bladel for pumping out so many articles two months in a row! Michael Washington from LightSwitchHelpWebsite.com assures me he will be pumping out more articles soon.. no pressure Michael! ;-) Thanks to all of you who share your knowledge with the community for free, whether that’s blogs, forums, speaking, etc.

        Top Forum Answerers

        I thought I’d start a section on the top forum contributors to recognize all the help these folks give everyday to answering questions in the LightSwitch forums on MSDN. These folks deserve as much credit as our bloggers, as they are also helping make the LightSwitch community a better place.

        Huge shout out to Yann Duran who consistently provides help in our General forum!

        Top 5 forum answerers in October:

        image

        Keep up the great work guys!

        LightSwitch Team Community Sites

        Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:

        LightSwitch MSDN Forums
        LightSwitch Developer Center
        LightSwitch Team Blog
        LightSwitch on Twitter (@VSLightSwitch, #VS2012 #LightSwitch)

        Enjoy!


        •• Rowan Miller reported EF6 Alpha 1 Available on NuGet in a 10/30/2012 post:

        A couple of months ago we released the RTM of EF5. Today we are pleased to announce the first alpha of EF6. EF6 is being developed in an open source code base on CodePlex, see our open source announcement for more details.

        We Want Your Feedback

        You can help us make EF6 a great release by providing feedback and suggestions. You can provide feedback by commenting on the feature specifications or starting a discussion on our CodePlex site.

        Support

        This is a preview of features that will be available in future releases and is designed to allow you to provide feedback on the design of these features. It is not intended or licensed for use in production.

        If you need assistance using the new features, please post questions on Stack Overflow using the entity-framework tag.

        Getting Started with Alpha 1

        The Get It page provides instructions for installing the latest pre-release version of Entity Framework.

        Note: In some cases you may need to update your EF5 code to work with EF6. See Updating Applications to use EF6 for more details.

        Note: Alpha 1 is a very early preview of EF6. The APIs and functionality included in Alpha 1 are likely to change significantly prior to the final release of EF6.

        EF6 Alpha includes the following new features and changes.

        • Async Query and Save - EF6 now supports the task-based asynchronous patterns that were introduced in .NET 4.5. We've put together a walkthrough that demonstrates this new feature. You can also view the feature specification on our CodePlex site for more detailed information.
        • Custom Code First Conventions - You can now write custom conventions for Code First to help avoid repetitive configuration. We provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to author more complicated conventions. There is a walkthrough that covers both of these options and a feature specification on our CodePlex site.
        • Multi-Tenant Migrations - In previous versions of EF you were limited to one Code First model per database when using Migrations; this limitation has now been removed. If you want to know more about how we enabled this, check out the feature specification on CodePlex.
        • Configurable Migrations History Table - Some database providers require the appropriate data types etc. to be specified for the Migrations History table to work correctly. The feature specification provides details about how to do this in EF6.
        • Code-Based Configuration - Configuration has traditionally been specified in a config file; EF6 also gives you the option of performing configuration in code. We've put together an overview with some examples and there is a feature specification with more details.
        • Dependency Resolution - EF now supports the Service Locator pattern and we've factored out some pieces of functionality that can be replaced with custom implementations. The feature specification provides details about this pattern, and we've put together a list of services that can be injected.
        • Updated Provider Model - In previous versions of EF some of the core components were a part of the .NET Framework. In EF6 we've moved all these components into our NuGet package, allowing us to develop and deliver more features in a shorter time frame. This move required some changes to our provider model. We've created a document that details the changes required by providers to support EF6, and provided a list of providers that we are aware of with EF6 support.
        • Enums, Spatial and Better Performance on .NET 4.0 - By moving the core components that used to be in the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and the performance improvements from EF5 on .NET 4.0.
        What's after Alpha 1

        If you want to try out the changes we've made since the last official pre-release, you can use the latest signed nightly build. You can also check out the Feature Specifications page on CodePlex for more information about new features that we are working on, and read our Design Meeting Notes to keep track of the evolving design of new features.


        Beth Massi (@bethmassi) announced on 10/29/2012 that she’s Speaking in San Diego this November:

        imageIn November I’ll be speaking down in sunny (hopefully) San Diego! On November 6th I’ll be speaking to the San Diego .NET Developers Group and then later I’ll be the guest star at the .NET Rocks Visual Studio 2012 Launch Road Trip on November 27th. If you’re in the area, come on out! Here are the details of each of these events…

        San Diego .NET Developers Group
        Nov 6th @ 6pm - 12400 High Bluff Drive, San Diego, CA

        Building Connected Business Applications in Light Speed
        Visual Studio LightSwitch is the easiest way to create modern line of business applications for the enterprise. In this session you will learn how LightSwitch helps you focus your time on what makes your application unique, allowing you to easily implement common business application scenarios—such as integrating multiple data sources, data validation, authentication, and access control. See how LightSwitch in Visual Studio 2012 has embraced OData making it easy to consume as well as create interoperable data services. Then see how LightSwitch makes it easy to deploy these services to the Azure cloud and consume them from other client applications and platforms. You will also see how the LightSwitch team is enabling mobile scenarios making it easy to create HTML5/JavaScript companion clients for modern mobile devices.

        REGISTER HERE

        Visual Studio 2012 Launch Road Trip
        Nov 27th @ 6pm – 16620 West Bernardo Drive, San Diego, CA

        The .NET Rocks! Visual Studio 2012 Launch Road Trip! (Building Modern Apps with a Modern Lifecycle - Sep 19 to Dec 2)
        Carl Franklin & Richard Campbell rented a big 37' RV and have been travelling all over the US & Canada for the launch of Visual Studio 2012. At each stop they record a live .NET Rocks! show with a guest star. Following that, they each do a presentation around building modern applications on the Windows platform. There will be food, drink, geeking out, and a lot of good times. You don’t want to miss one of these shows!

        REGISTER HERE

        Hope you can make it!


        Return to section navigation list>

        Windows Azure Infrastructure and DevOps

        ‡‡ David Linthicum (@davidlinthicum) asserted “Microsoft is making real progress in move to cloud computing, but it's not leading -- while Amazon, Apple, and Google are” in a deck for his Microsoft needs to step up its cloud game article of 11/2/2012 for InfoWorld’s Cloud Computing blog:

        imageAs InfoWorld's Ted Samson pointed out this week, Microsoft is intertwining Windows 8, Windows Phone 8, and Windows Azure to help developers build multiplatform, cloud-friendly apps. It announced a batch of new services and functionality for its cloud platform, including the extension of Windows Azure Mobile Services to support Windows Phone 8.

        imageThe move should provide easy cloud-based mobile application development for those loyal to Microsoft platforms. Microsoft even provides a store to sell these Azure-built applications. But something's still missing from Microsoft's cloud picture.

        image222The core issue is that a company the size of Microsoft should be doing more leading and less following. All the technologies and services Microsoft announced this week for its cloud ecosystem were born, proven, and executed by other companies, such as Apple, Google, and Amazon.com. Microsoft is looking much less innovative than these competitors, so it's much less likely to capture and hold the emerging $50 billion cloud computing market.

        I have a few pieces of advice to my friends in Redmond:

        First, stop adopting the strategies of other successful cloud companies. Microsoft is hardly alone in having a WAID (whatever Amazon is doing) approach -- Oracle and Hewlett-Packard are also cloud WAIDers. But to break out from the me-too pack, Microsoft needs to find its own path, including new cloud services that drive both developers and users back to Microsoft.

        Second, open up a bit. The big issue with Microsoft is that its products are largely closed systems. Those companies that are moving into cloud computing have lock-in nightmares, which is slowing them down and paralyzing many others. Cloud providers need to figure out how to address that lock-in issue up front. I'm waiting impatiently for Microsoft to jump feet-first into open cloud standards.

        Third, leverage your desktop dominance. Although Azure is designed to work on Microsoft's desktop platforms and software, actual integration with its Windows and Office ecosystem seems to be an afterthought. Seamless yet loosely coupled integration of the cloud and the desktop is something that Microsoft can -- and should -- do better.

        Microsoft has some real potential in cloud computing. If only it would commit to being a star player, not a minor leaguer.

        IMO, Google’s playing catch-up with its Google Compute Engine.


        ‡‡ Glenn Block (@gblock) reported New Powershell command line goodies for Windows Azure at #bldwin on 11/3/2012:

        imageWe just shipped the latest update for our Powershell cmdlets for Windows Azure. It is packed with new and useful features that you’ve been asking for! Below is a quick summary of what is new, followed by more detail. If you haven’t connected the cmdlets to a subscription before, a minimal setup sketch follows the summary list.

        • Windows Azure Websites (help website) – Create, delete, configure and manage your Windows Azure Websites. This includes the ability to do things like CRUD management of connection strings and app settings, setting the number of workers, and much more. On the management side you can drill into your deployments, do rollbacks or download the logs.
        • Configuring Github deployment – This relates to Websites, but it’s too cool to bury it in the previous bullet. You can now connect your Website to Github all via the command line. This works really seamlessly if you are in an existing local git repo as we extract the info from your remotes. All of this without ever having to open a browser, w00t!
        • SQL Database (help sql) – Create and manage SQL Server instances, firewall rules, etc. in the cloud!
        • Caching (help azurecache) – For PHP and Node developers you can scaffold a cache worker role which uses the new dedicated Azure cache. You can then enable your PHP/Node app running in a web role to talk to the cache using the Memcached driver!
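
        Before any of the commands below will work, the cmdlets need to be installed and connected to a subscription. A minimal setup sequence looks roughly like the following; the cmdlet names are as I recall them from this release (run Get-Command *Azure* to confirm), and the file path and subscription name are hypothetical.

        # Load the Windows Azure module installed by the Web Platform Installer.
        Import-Module Azure

        # Download a .publishsettings file for the subscription (opens a browser),
        # then import it so the cmdlets can authenticate management calls.
        Get-AzurePublishSettingsFile
        Import-AzurePublishSettingsFile "C:\temp\MySubscription.publishsettings"   # hypothetical path

        # Confirm the subscription is registered and select it as the current one.
        Get-AzureSubscription
        Select-AzureSubscription "MySubscription"                                  # hypothetical name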

        image222If you are like me and just want to jump in, don’t wait, just go grab the bits now. And if you are like my friend @ayende and won’t use tools until you read the code, no problem, you can git that here and step through line by line :-)

        Windows Azure Websites

        Note: Example apps below are all node.js apps, but you can use Websites for .NET, Node and PHP.

        We’ve introduced a comprehensive set of commands for working with your Websites. Just type “help website” and you’ll see.

        image

        Creating websites

        Below you can see how I create my website using the New-AzureWebsite cmdlet and enable git-based deployment. I then push to my repo and, as expected because this is a node app, npm kicks off to download my node.js modules. A rough sketch of the equivalent commands follows the screenshot.

        image
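
        Since the screenshot doesn't reproduce here, this is only a sketch of the equivalent commands, not the exact session shown above; the site name and region are hypothetical, and the -Location/-Git parameter names are as I recall them for this release (check help New-AzureWebsite).

        # Create the site in a chosen region and set up local git deployment;
        # the cmdlet adds an 'azure' remote to the repo in the current folder.
        New-AzureWebsite mynodesite -Location "West US" -Git

        # Pushing to that remote deploys the app; for a node app the server-side
        # deployment runs npm to restore the modules listed in package.json.
        git add .
        git commit -m "initial deployment"
        git push azure master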

        Github based deployment

        In our last release of Windows Azure Websites we enabled you to wire up your Website to Github via the portal so that each time you push to Github, you push to your Website. Well, now you can do the same via Powershell! Below you can see me creating the same website, only this time I specify Github, which prompts me for my Github credentials. A sketch of the commands follows the screenshots.

        image

        After entering my credentials, Github is wired up. I can then use the Get-AzureDeployment cmdlet to check whether my app has been deployed.

        image
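
        Again, the screenshots are missing in this repost, so the following is only a sketch of what the GitHub flow looks like when run from a folder that is already a clone of the GitHub repo; the -Github switch and the website deployment cmdlet name are as I recall them and may differ slightly in your build.

        # Create the site and wire it to the GitHub repo backing the current folder;
        # you are prompted for GitHub credentials so Azure can register the web hook.
        New-AzureWebsite mynodesite -Location "West US" -Github

        # After pushing to GitHub, verify that a deployment was picked up.
        Get-AzureWebsiteDeployment mynodesite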

        Note: With this release, if your remote is using an SSH URL, we will prompt you with a list of repos to select from. This is point-in-time behavior and will be fixed shortly to work the same way as we do for https remotes.

        Browsing

        Once I have a website created, I can use the convenient Show-AzureWebsite cmdlet.

        image

        And voila, my site will open in the browser.

        image

        Site management

        The cmdlets offer you a lot of rich options for configuring your websites, and you can pipe them together for scripting. For example, below I am retrieving the list of websites using Get-AzureWebsite and piping the resulting objects into the Remove-AzureWebsite cmdlet, which removes all my websites. A sketch of the pipeline follows the screenshot.

        image
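
        As a rough sketch of that pipeline (the site name is hypothetical, and the -Force switch to suppress the confirmation prompt is an assumption; drop it if your build doesn't have it):

        # List every site in the subscription, open one in the default browser,
        # then pipe all of the site objects into Remove-AzureWebsite to delete them.
        Get-AzureWebsite
        Show-AzureWebsite mynodesite
        Get-AzureWebsite | Remove-AzureWebsite -Force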

        App Settings

        One of the really nice features in the Websites portal experience is that you can configure “App Settings”. For node developers this information is then surfaced as environment-level variables that can be accessed within your application. It’s very useful for either secure information or environment-specific settings. For example, imagine I deploy the same site code to a staging website and a production website; I can use settings to store the DB connection information so that each website connects to the correct server. With this release you can now do this via the CLI!

        Below you can see a screenshot showing how I am configuring my app to have a “message” setting.

        image

        I have modified my express app to then pull process.env.message for the main text:

        image

        I can then retrieve the settings and modify them, piping the results directly back into Set-AzureWebsite. The changes are instant. A sketch of the equivalent commands follows the screenshots.

        image

        image
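
        A rough sketch of the commands behind those screenshots; the setting value is hypothetical, and the -AppSettings hashtable parameter and the AppSettings property on the returned site object are as I recall them for this release, so verify with help Set-AzureWebsite.

        # Replace the site's app settings with a hashtable; 'message' is the key
        # the express app above reads via process.env.message.
        Set-AzureWebsite mynodesite -AppSettings @{ message = "Hello from Azure" }

        # Read the settings back to confirm the change took effect.
        (Get-AzureWebsite mynodesite).AppSettings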

        SQL Database

        If you are deploying apps to the cloud then most likely you need a database. In Windows Azure you have many options for hosted databases at your disposal, including SQL Server, MySQL and now MongoDB. In this release we’re happy to announce that we’re shipping cmdlets for managing Azure-hosted SQL Databases. Just type “help sql” and you’ll see the cmdlets.

        image

        With the new cmdlets it is really easy to provision a new SQL server, add firewall rules, and create databases; a rough sketch of that flow follows.
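
        The credentials and IP address below are hypothetical, and the exact cmdlet and parameter names (particularly for creating the database through a connection context) are from memory for this release, so double-check them with help sql before relying on this sketch.

        # Provision a SQL Database server; the generated server name is returned.
        $server = New-AzureSqlDatabaseServer -Location "West US" `
            -AdministratorLogin "dbadmin" -AdministratorLoginPassword "Str0ng!Passw0rd"

        # Allow my current (hypothetical) IP address through the server firewall.
        New-AzureSqlDatabaseServerFirewallRule -ServerName $server.ServerName `
            -RuleName "home" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

        # Connect as the administrator and create a database on the new server.
        $cred = Get-Credential "dbadmin"
        $ctx = New-AzureSqlDatabaseServerContext -ServerName $server.ServerName -Credential $cred
        New-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "mydb" -MaxSizeGB 1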

        Azure Cache role for node and PHP!

        One of the big requests we’ve heard from the community was to support using the new Azure Cache from node.js and PHP applications. With this release we’ve introduced a new set of cmdlets for doing just that. You can create a dedicated cache in a Worker Role and then wire up Web roles to communicate with that cache using the Memcached protocol. Below you can see how easy it is; a sketch of the commands follows the screenshot.

        image
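
        The two cmdlets shown in the screenshot are, as I recall from this release, Add-AzureCacheWorkerRole and Enable-AzureMemcacheRole; the role names below are hypothetical, and the commands assume you are inside a service scaffold that already contains a node or PHP web role.

        # Add a dedicated cache worker role to the service scaffold.
        Add-AzureCacheWorkerRole CacheWorkerRole

        # Wire WebRole1 to that cache over the Memcached protocol; the app then
        # connects to 'localhost_WebRole1' as shown in the snippets below.
        Enable-AzureMemcacheRole WebRole1 CacheWorkerRole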

        Once the cache is wired up, you can use either the node or PHP memcached driver to talk to it. When you connect to the client, you use the convention ‘localhost_[webrole]’, i.e. ‘localhost_WebRole1’ for the previous example, to talk to the cache. Below is a snippet of how to do this in node using the mc module.

        var mc = require("mc");
        var mcclient = new mc.Client('localhost_WebRole1');
        mcclient.connect(function() {
            console.log("Connected to the localhost memcache on port 11211!");
        });

         


        And here is how to do it in PHP.

        $memcache = new Memcache;
        $memcache->connect('localhost_WebRole1', 11211) or die ("Could not connect");

         


        Using the new distributed cache can be a big help for applications that store large amounts of transient or lookup data. Currently we only support the dedicated cache, but we are looking to also support the shared in-role cache. We could use your feedback, though, so let us know which model you use (distributed vs. shared cache).

        Many more goodies to come

        This is just the beginning of what will be a wave of new Powershell cmdlets for Azure, things like ServiceBus, Azure Store and much much more.

        Go get started by downloading them now

        Let us know how you like the new cmdlets and if there are specific areas you’d like to see us invest in.

        Spoiler alert: We also just updated our npm package, where you will find much of this functionality in our cross-platform CLI as well. Stay tuned, as that post is next.

        $havefun = Download_the_bits()


         •• James Staten (@staten7) asserted You’re Running Out Of Excuses To Not Try Microsoft Windows Azure in a 10/31/2012 post to his Forrester Research blog:

        imageIf you have dismissed Microsoft as a cloud platform player up to now, you might want to rethink that notion. With the latest release of Windows Azure here at Build, Microsoft’s premier developer shindig, this cloud service has become a serious contender for the top spot in cloud platforms. And all the old excuses that may have kept you away are quickly being eliminated.

        image222In typical Microsoft fashion, the Redmond, Washington giant is attacking the cloud platform market with a competitive furor that can only be described as faster follower. In 2008, Microsoft quickly saw the disruptive change that Amazon Web Services (AWS) represented and accelerated its own lab project centered around delivering Windows as a cloud platform. Version 1.0 of Azure was decidedly different and immature and thus struggled to establish its place in the market. But with each iteration, Microsoft has expanded Azure’s applicability, appeal, and maturity. And the pace of change for Windows Azure has accelerated dramatically under the new leadership of Satya Nadella. He came over from the consumer Internet services side of Microsoft, where new features and capabilities are normally released every two weeks — not every two years, as had been the norm in the server and tools business prior to his arrival.

        imageMicrosoft’s initial effort at differentiating Windows Azure centered around offering platform-as-a-service (PaaS) versus the raw virtual infrastructure-as-a-service (IaaS) play of AWS’s EC2. But this move, on its own, looked too much like lock-in. Over the years it has expanded the development frameworks, languages, and runtimes it supported and finally relented on IaaS this summer. It is now at the point where it can fulfill its claim that Windows Azure is Windows in the cloud. Frankly, if you can build it in Visual Studio and/or deploy it to Windows or to a virtual machine (yes, that means Linux), then it can run on Windows Azure. That doesn’t mean it should run in the cloud — that’s a totally different story — but the excuse that a workload couldn’t run on Azure seems, for the most part, to be gone.

        On the cost front, Microsoft has been diligently following AWS pricing, although not in lockstep (that would make cost comparisons too easy). AWS had one up on them most of this year with AWS Reserved Instances, a discounting mechanism for EC2 that has quickly become the secret sauce for ensuring IaaS always trumps hosting on price. Well, Microsoft one-upped AWS this month when it released its Azure Commitment pricing scheme. Think of it as the cell phone minutes roll-over plan mixed with an all-you-can-eat plan. Where Reserved Instances gives you discounts on specific types of compute instances only in the EC2 service, Azure Commitment gives you a blanket discount on resource hours across all Azure services, including SQL Server and storage hours, plus higher level services like Azure Media Services and their MBaaS. And the plan can be purchased much more flexibly than Reserved Instances. This makes cloud consumption planning much easier and much more forgiving than the AWS plan (although the AWS Reserved Instance Marketplace is a bit of a buffer for planning errors on this platform).

        If you have been waiting for your peers to use Windows Azure, that excuse is going away too. Microsoft said it now has over 100,000 separate named accounts using Azure, is adding thousands per month, and compute use is doubling every six months. If you are a government agency, an airline, a pharmaceutical maker, media company, a software-as-a-service (SaaS) provider, an oil company, or a financial services firm, your peers are already using Windows Azure. And our surveys of developers, development managers, and even I&O pros show Microsoft as a top-five most used cloud platform.

        But you miss a lot of Windows Azure’s value if you just look at it through the lens of AWS. Where Microsoft is differentiating itself centers around its integration with the on-premises world. From the beginning Microsoft saw Azure as an extension of Visual Studio and they were right with this approach as developers were pioneering cloud platforms, and they certainly had a captive audience. But the company has now integrated Azure with its Team Foundation Server that links teams of Visual Studio developers and applies life-cycle management to a normally chaotic, creative process. Now dev managers can control what goes up to the cloud, when, and for what purpose and can use Azure as a collaboration platform for diverse development teams. Earlier this year, Microsoft did the same thing for infrastructure & operations managers when it integrated System Center with Azure. Now you can deploy workloads to Azure, and IT Ops can monitor and maintain them using tools they already trust. And now that Windows Azure IaaS runs the same underlying OS (Windows Server 2012) and hypervisor (Hyper-V), you can suddenly, with confidence, have a complete hybrid cloud strategy using assets you already are using that share the same foundation and are designed to work together. And Microsoft’s acquisition of StorSimple should play a part in this story. Yes, I know your “private cloud” isn’t based on Hyper-V, but it isn’t really a cloud either — so that opportunity still lies ahead for both you and Microsoft.

        Where Microsoft needs to work the hardest now is in evangelizing Windows Azure to its vast ecosystem of software providers who build applications for on-premises Windows server and client. If it wants to have any hope of competing toe-to-toe with AWS, it needs to bring as many of these partners over to Azure as it possibly can. This doesn’t mean convincing them to migrate their applications to Azure, rather it means helping them incorporate Azure into their business models. For client and server applications, this means App Internet integrations, where code in the cloud enhances the on-premises experience. For mobile developers, it means using Azure as their mobile back end. For some it will be transitioning their businesses to SaaS or exposing their software as a cloud-based service, but for many this wholesale move just won’t compute.

        Of course, Microsoft should also appeal to the new generation of startups that are building fresh capabilities in the cloud, and its latest moves should make Windows Azure an attractive platform. But companies that build software for Windows today far outnumber the startup community, and they collectively represent a market opportunity cloud companies could only dream of tapping near term.

        Both ISV audiences will benefit from the revamped Windows Store that makes it easier for ISVs to showcase their wares to all appropriate audiences in the proper context without having to work their way into separate stores for Windows desktop, server, mobile, and Azure. Developers can add app services like New Relic and AppDynamics, and data sets such as historical hurricane data from Weather Trends International, to their Windows Azure projects the same way they would add any native Windows Azure resource. Purchases can be made directly through the Windows Azure Management Portal. Good job.

        There’s certainly a lot of work ahead for Microsoft to make Windows Azure a sustained leader in the cloud platform space. We’re only getting started in enterprise adoption of cloud platforms, after all. But the company’s hard work in making Windows Azure a credible place to put your custom applications makes them a credible first-tier competitor.


        Avkash Chauhan (@avkashchauhan) described Using Windows Server 2012 OS with Windows Azure Cloud Services and .net 4.5 in a 10/29/2012 post:

        imageWith latest release of Windows Azure, Cloud service user can run their cloud service application in Windows Server 2012 OS. If you have seen the new portal you may have seen the following new addition:

        Currently this setting is available only for new applications, and your Windows Server 2008 SP* or Windows Server 2008 R2 based applications cannot be migrated to Windows Server 2012. So if your application is running with OS Family 1 (Windows Server 2008 SP*) or 2 (Windows Server 2008 R2), you cannot migrate to Windows Server 2012.

        As you may have already seen, SDK 1.8 supports .NET 4.5. If you want to choose .NET 4.5, you must use OS Family 3; OS Families 1 and 2 do not support .NET 4.5-based applications. Also, OS Family 3 only supports .NET 4.0 and higher.

        image222<ServiceConfiguration serviceName="WindowsAzure2012" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="3" osVersion="*" schemaVersion="2012-10.1.8">

        If you have an existing .NET 3.5 application, it will be supported only on OS Families 1 and 2.

        Welcome back to blogging, Avkash.


        Avkash Chauhan (@avkashchauhan) posted a List of components installed with Windows Azure SDK 1.8 on 10/29/2012:

        imageWindows Azure SDK 1.8 is live and available to work with VS2012 and VS2010 SP1. You can choose the appropriate WebPI 4.0-based installer from here.

        Direct Download Link: http://www.microsoft.com/en-us/download/details.aspx?id=35448

        image222Following is the list of all components installed with SDK 1.8:

        By default the SDK is installed here:

        C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10

        You can verify the different components installed as below:


        <Return to section navigation list>

        Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

        ‡ Jason Verge (@jasonverge) reported Microsoft Brings Windows Azure Cloud to China in an 11/2/2012 post to The Data Center Knowledge blog:

        imageThe Windows Azure public cloud is coming to China. Microsoft has signed a Memorandum of Understanding (MOU) with the municipality of Shanghai, and signed an agreement to license Microsoft technologies to 21Vianet, which will offer Azure in China out of local data centers. Customers can also use Office 365 and Windows Azure directly from Microsoft data centers in Singapore and Hong Kong.

        image22221Vianet will also be offering Office365, the SaaS Office offering. The Shanghai government also announced it will adopt both Azure and Office365 from 21Vianet, once they’re available. This is a major part of Microsoft delivering their cloudOS vision to China, which aims to deliver multi-tenant public and private cloud services to millions of businesses in China.

        Microsoft first announced it was expanding into China in September. The company has been increasing its investment in the country through new hires, researching local requirements and a general expansion push. This included the hiring of 1,000 additional employees over the year, as well as bumping up the R&D investment in China by 15% from the current half a billion dollars. In terms of geographical expansion, it's moving into 15 provinces and 20 cities. This news also follows the recent launch of Windows Server 2012. Microsoft's vision is to offer cloud solutions that run on-premises, in a customer-owned data center, or on the Windows Azure public cloud.

        So with quite a few international announcements this week, it’s now time to ask: Why China? Simply, it’s a massive market with tons of upside.

        Companies and local governments are looking to cloud to improve productivity. China is the world’s largest market, with a lot of room to grow. A recent Forrester Research report on cloud computing (Forrester Research’s “Sizing The Cloud Markets In Asia Pacific” released Feb. 3, 2012) found that the public cloud market in China will grow from $297 million in 2011 to $3.8 billion in 2020.

        Microsoft is striking early, as there is no clear cloud leader in China as of yet. Amazon Web Services, the chief competition, offers EC2 in Tokyo and Singapore, as well as edge locations in Osaka, Hong Kong, Singapore, and Sydney. But for the most part, China is a market not yet fully tapped by anyone.

        But like the Telehouse Russia announcement, China is a huge market that is not without its potential issues. Several technology companies have had a hard time with Chinese expansion plans. Google has had its tiffs with China, including Google rerouting searches through Hong Kong at one point, Go Daddy stopping domain name registration, and other problems with Chinese censorship and high piracy rates (which don’t really affect public cloud, but still) in general across the tech landscape.

        Microsoft's cloudOS strategy and vision is a combination of Azure and Windows Server 2012 that forms a platform to build on regardless of preferred language, tool or framework. It's been adding support for PHP, Node.js, and other languages in support of this vision. The company added new features to Azure this week, including a distributed cache feature which has drawn a lot of interest.

        In other news, Microsoft also announced that Azure will be supporting Halo 4's launch and multiplayer experience, with millions of concurrent players expected, Satya Nadella, president of Microsoft's Server and Tools business, said on his blog.


        <Return to section navigation list>

        Cloud Security and Governance

        No significant articles today


        <Return to section navigation list>

        Cloud Computing Events

        •• Ted Samson (@tsamson_IW) asserted “Microsoft is intertwining Windows 8, Windows Phone 8, and Windows Azure to help developers build multiplatform, cloud-friendly apps” in a deck for his Build 2012: Microsoft extends Azure cloud OS to Windows Phone 8 article of 10/31/2012 for InfoWorld’s Developer_World blog:

        imageWindows Azure was the topic du jour this morning at day two of Microsoft Build 2012 in Redmond, Wash., as Microsoft announced a batch of new services and functionality for its cloud platform, including the extension of Windows Azure Mobile Services to include support for Windows Phone 8, in addition to Windows 8. On top of that, Microsoft announced broader language support for Azure, a new Windows Azure Store, and the availability of Visual Studio Team Foundation Service.

        imageKeynoter Satya Nadella, president of Microsoft's server and tools business, also used his stage time to reiterate one of Build's underlying themes: That Microsoft is going to great lengths to tightly intertwine Windows 8, Windows Phone 8, and Windows Azure in the spirit of making it easier for developers to build multiplatform, cloud-friendly apps with minimal code that can run on all Windows form factors, including PC, laptop, tablet, and smartphone.

        image222"Our focus is to make sure we're creating an experience that makes it easy for developers to do what they do best: build apps," Nadella wrote in a blog post complementing his keynote. "The unique value we're delivering with Windows Azure is the power to easily create and scale applications though the power of platform services that enable a variety of device experiences, social, and Web-based applications."

        imageAmong Windows Azure's functions is serving as the backend engine that makes it possible to provide push notifications on a single framework that supports multiple formats, he said, while allowing the apps to be distributed through the Windows Store. As an example, he cited USA Today's Windows Phone 8 app, which relies on Azure for pushing headlines to users. On the enterprise app side, Azure Active Directory can be integrated with Office 365 for single sign-on, ensuring that users can securely access their data and services.

        Noting the piping-hotness of the gaming market, Microsoft has spent ample time wooing game developers at Build. As an example, Nadella talked up the soon-to-be-released Halo 4 for Xbox 360 as a showcase for Azure's flexibility. According to Nadella, Azure is the underlying architecture for the game's multi-player functionality, capable of supporting 2 million-plus concurrent players at once. "With the flexible and on-demand architecture of Windows Azure, each Halo 4 developer had their own development environment, which allowed development and testing to run in parallel," he added.

        Nadella said that Microsoft's goal is for Windows Azure to be the most complete platform to build on, such that developers can build apps using their preferred language, tool, or framework. To that end, Microsoft also announced updates to Azure including website language support for the .NET Framework and Python. Azure also handles node.js, PHP, and Java.

        The newly unveiled Windows Azure Store offers users a place to find, purchase, and provision "premium services" for cloud applications. The store is currently in preview mode and open only to United States residents. App services available in the store today include ClearDB on Windows Azure, which enables developers to build apps using native MySQL databases; MongoLab, touted as a "full-featured MongoDB cloud database solution" designed to automate the operational aspects of running MongoDB; and SendGrid, a cloud-based email infrastructure in the cloud. Data services, meanwhile, include the Bing Search API, which enables developers to embed and customize search results in applications or websites using XML or JSON; and Melissa Data's IP Check, designed to identify an Internet user's geographical information, including country, region, city, latitude and longitude, ZIP Code, ISP, and domain name.

        Finally, Microsoft announced that Visual Studio Team Foundation Service, unveiled in preview mode at last year's Build, is now available for full production use. It's a version of Team Foundation Server hosted on Windows Azure, aimed at making it easier for developers to get started with the ALM (application lifecycle management) platform. The service targets an array of platforms, supporting not only Visual Studio, Excel, and Project, but also development for Mac OS, Linux, Solaris, and HP-UX.


        • Brady Gaster (@bradygaster) requested bloggers to promote A Windows Azure Community Event to be held online via Channel9 (@Ch9) on 11/14/2012 from 8:00 AM to 5:00 PM PST:

        image

        imageOn November 14, 2012, Microsoft will be hosting Windows AzureConf, a free event for the Windows Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions executed by Windows Azure community members. Streamed live for an online audience on Channel 9, the event will allow you to see how developers just like you are using Windows Azure to develop applications on the best cloud platform in the industry. Community members from all over the world will join Scott in the Channel 9 studios to present their own inventions and experiences. Whether you’re just learning Windows Azure or you've already achieved success on the platform, you won’t want to miss this special event.

        Click here for the registration form.

        Links and Resources

        Windows Azure Training Kit
        @WindowsAzure on Twitter
        Windows Azure on Facebook
        Subscribe to the Windows Azure Newsletter

        If you haven't already discovered the best cloud computing platform in the industry, sign up for a free Windows Azure account here. In under 5 minutes you'll have access to unlimited resources.

        “Unlimited resources” is a bit of a stretch, unless you have unlimited funds.


        • 10gen invited Windows Azure developers to Join Over 1,000 MongoDB Community Members at MongoSV in a 10/30/2012 email message:

        MongoSV
        December 4th, 2012
        Santa Clara Convention Center
        5001 Great America Parkway
        Santa Clara, CA 95054

        Learn More

        Register Now

        On Tuesday, December 4th, over 1,000 MongoDB users and contributors will converge in Santa Clara for the largest MongoDB conference of the year. MongoSV is 10gen's annual user conference in Silicon Valley, and in 2012 we will be hosting over 40 sessions on MongoDB led by production users, 10gen engineers, and ecosystem partners.

        In this multi-track conference, there will be a wide variety of sessions to meet the needs of developers and administrators at all levels of expertise. From learning the details of a production use case to discovering new techniques for scaling your infrastructure, you will walk away with immediately applicable skills.

        Register by November 2nd to take advantage of early bird pricing that is only $50. Hurry, these early bird tickets have limited availability.

         
        MongoSV Workshops December 3rd

        We are offering two in-depth, hands-on MongoDB workshops the day before MongoSV: Schema Design and Architecture, and Operations Hands-On. Register today and receive a complimentary ticket to MongoSV.

         

        Follow Us

        Twitter

        facebook

        LinkedIn

         

        10gen develops MongoDB and provides consulting, support, and training. We're here to help - contact us with questions!


        • Jeff Price reported that High-Profile Start-ups Discuss Why They Chose Azure and How Azure Benefits Them will be the topic for the San Francisco Bay Area Azure Developers group meeting on 11/5/2012 at Microsoft San Francisco:

        imagePlease join us as three high-profile start-ups discuss why they chose Windows Azure and how it benefits their businesses:

        image222Getable, Inc. was originally built in 2010 with Windows Azure in mind. Our sites and mobile APIs are running on MVC3. We are using Azure Compute, SQL Azure for all relational data and a significant amount of data is stored in Azure Table Service. All our product and business images are stored with the Azure Blob service. We have recently implemented TFS Preview for all our CI/CD needs and completely love the tight integration between Azure and TFS Preview. Getable is looking into SQL Azure Federation at the moment. We are also interested in the new Mobile Services to evaluate our additional mobile apps planned in our roadmap. Last, we are hiring! Getable is building a pool of candidates for Data Architecture, cross-functional MVC and mobile developers (mainly iOS). Presenter is Ludo Goarin.

        Kaggle Inc. is the leading platform for predictive modeling competitions. Companies, governments and researchers present datasets and problems - the world's best data scientists then compete to produce the best solutions. We use Azure to host our competition infrastructure, and to productionalize the winning models from our competitions through the Kaggle Engine service. Kaggle CTO, Jeff Moser, recently said the following about Windows Azure: “I didn't want to futz with managing the IT needs of our servers so Azure's Platform as a Service (PaaS) offering was very attractive. I was very familiar with Microsoft's tools already [.NET development platform and its accompanying C# programming language] and Azure was easy to grasp given that background.” Jeremy Howard, Kaggle’s Chief Scientist, was recently interviewed by Wired Magazine about Azure "the most misunderstood cloud" http://www.wired.com/wiredenterprise/2012/04/microsoft-azure-java/ Presenter is Jeremy Howard.

        We Compete, Inc. is an early stage startup that provides a place on the web and a set of mobile apps that seek to enhance athletic competitive experiences. We do it via easy, intuitive competitive event creation and an opportunity for everyone to add and view multimedia content as event unfolds. However, our biggest advantage is that we always put competitors front and center - just wait and see for yourself at http://www.we-compete.com. We chose Azure because we believe in the leadership team heading up the Azure efforts. We are confident that this well-integrated, comprehensive and scalable cloud offering provides a solid foundation for our business. Another reason is that we have several decades of collective development experiences on the Microsoft platform, which we are able to utilize developing for Azure. There are several other reasons as well that will be covered during the presentation. Presenter is Eugene Chuvyrov.

        After 6:00 p.m., please enter through the main entrance on the 1st Floor and the security guard will provide elevator access to the 7th Floor.

        Location:

        Microsoft San Francisco (in Westfield Mall where Powell meets Market Street)
        835 Market Street
        Golden Gate Rooms - 7th Floor
        San Francisco, CA (map)


        My (@rogerjenn) Windows Azure Sessions at //BUILD/ 2012 - Quick Reference Guide lists the 24 sessions with Windows Azure as a topic in date/time order:

        image222The following list of 24 sessions was extracted from Channel9’s Build 2012 article of 10/30/2012 filtered by Windows Azure:

        • Building data centric applications for web, desktop and mobile with Entity Framework 5.

          • Rowan Miller
          • October 30, 2012 from 2:15PM to 3:15PM
          • Never tried Entity Framework before? Or long term Entity Framework developer? Come learn how Entity Framework 5 makes it very simple to keep both your code and database in sync as you make changes using Code First and Migrations. Plus learn about many other enhancements including Designer improvemen...
        • Connecting C++ Apps to the Cloud via Casablanca

          • Niklas Gustafsson, Artur Laksberg
          • October 30, 2012 from 5:45PM to 6:45PM
          • In this presentation, we will introduce you to Casablanca, a Microsoft incubation effort to explore how to best support C++ developers who need to take advantage of the radical shift in software architecture that cloud computing represents. With Casablanca, C++ developers get modern ...
        • Windows Azure Overview

          • Scott Guthrie
          • October 31, 2012 from 11:15AM to 12:15PM
          • Windows Azure is a flexible and open cloud platform for a wide variety of applications ranging from web sites to enterprise and mobile applications. In this session Scott Guthrie will demonstrate how to quickly build and deploy applications using the new Windows Azure features and services including...
        • Introduction to Windows Azure Infrastructure as a Service (IaaS)

          • Mark Russinovich
          • October 31, 2012 from 1:45PM to 2:45PM
          • Join Mark Russinovich for a tour of the features that make up the Windows Azure Virtual Machines and Virtual Networks offerings, which collectively make up Windows Azure’s Infrastructure as a Service (IaaS) support. Using demonstrations throughout, he explains the Virtual Machine storage architectur...
        • Building Rich Media Applications on Windows 8 with Windows Azure Media Services

          • Mingfei Yan
          • October 31, 2012 from 1:45PM to 2:45PM
          • In this session we will provide an overview of the latest release of Windows Azure Media Services. With this set of video services built on top of Windows Azure, you can create and deliver rich media with a high-quality viewing experience on a global scale, to various platform and devices. We will i...
        • Advanced Windows Azure Infrastructure as a Service (IaaS)

          • Michael Washam
          • October 31, 2012 from 3:30PM to 4:30PM
          • Learn from a developers perspective how to use Windows Azure Virtual Machines to run your workload in the cloud. You will see how to automate virtual machines with the service management API and with tools from the Windows Azure SDK, PowerShell and the cross platform command line tools. Additionally...
        • Building Big: Lessons learned from Windows Azure customers - Part I

          • Simon Davies, Mark Simms
          • October 31, 2012 from 3:30PM to 4:30PM
          • Millions of requests per day. Global coverage. Rapid feature deployments. Zero down time. These are the requirements of Windows Azure’s top customers. Using key Windows Azure features, such as compute, cache, CDN and traffic manager, you can quickly build services that meet the most demanding of wor...
        • Developing Mobile Solutions with Windows Azure Part I

          • Josh Twist
          • October 31, 2012 from 3:30PM to 4:30PM
          • Join us for a session packed with live coding as the presenter builds a Windows 8 application and brings it to life with the connected power of Windows Azure Mobile Services. We’ll look at how easy it is to add authentication, secure structured storage and even send push notifications to update live...
        • Building Big: Lessons learned from Windows Azure customers - Part II

          • Simon Davies, Mark Simms
          • October 31, 2012 from 5:15PM to 6:15PM
          • Millions of requests per day. Global coverage. Rapid feature deployments. Zero down time. These are the requirements of Windows Azure’s top customers. Using key Windows Azure features, such as compute, cache, CDN and traffic manager, you can quickly build services that meet the most demanding of wor...
        • Developing Mobile Solutions with Windows Azure Part II

          • Chris Risner
          • October 31, 2012 from 5:15PM to 6:15PM
          • Now that you know about Windows Azure Mobile Services join us for this demo packed session to learn how to take your Windows Store and Windows Phone 8 apps to the next level. Learn how to extend your existing applications to support common scenarios such as geo-location, media, and cloud to device m...
        • Building end-to-end apps for SharePoint with Windows Azure and Windows 8

          • Rob Howard, Donovan Follette
          • November 1, 2012 from 8:30AM to 9:30AM
          • With the deep SharePoint 2013 API set, coupled with the new app models for SharePoint and Office, the opportunity to build innovative end-to-end solutions that span cloud services and devices is just plain breathtaking. Devices can seamlessly reach into SharePoint via REST to retrieve data and Share...
        • Windows Azure Active Directory: enabling single sign on and directory services for cloud SaaS apps

          • Vittorio Bertocci
          • November 1, 2012 from 10:15AM to 11:15AM
          • Active Directory enabled generations of developers to focus on their business applications features rather than worrying about identity management. Windows Azure Active Directory is Active Directory reimagined for the cloud, designed to solve for you the new identity and access challenges that come ...

        • Continuous Integration with Windows Azure Websites

          • Johnny Halife, Justin Beckwith
          • November 1, 2012 from 10:15AM to 11:15AM
          • Windows Azure enables developers to use a variety of workflows to automatically deploy code from the tools you’re already using, like TFS, CodePlex, and GitHub. This talk will focus on the various ways to deploy your projects to Windows Azure Web Sites, including git deployment, TFS deployment, cont...

        • Windows Azure Internals

          • Mark Russinovich
          • Mark Russinovich goes under the hood of Microsoft’s cloud OS, Windows Azure. Intended for developers who have already gotten their hands dirty with Windows Azure and understand its basic concepts, this session gives an inside look at the architectural design of Windows Azure’s compute platform. Lear...
        • Developing Big Data Analytics Applications with JavaScript and .NET for Windows Azure and Windows

          • Matt Winkler
          • November 1, 2012 from 2:30PM to 3:30PM
          • In this session we will discuss key aspects of using non-JVM languages in the Hadoop environment. First, we will show how we can reach to a much broader set of developers by enabling JavaScript support on Hadoop. The JavaScript API lets developers define Hadoop jobs in a style that is much more natu...

        • Getting Started with Cloud Services Development

          • Paul Yuknewicz
          • November 1, 2012 from 4:15PM to 5:15PM
          • Come to this session to learn how to create Platform-as-a-Service style (PaaS) cloud services in Windows Azure. See how to have simplified application deployment and configuration, high availability and scale and see how the platform can take care of administrative tasks such as OS patching and mach...

        • Javascript from client to cloud with Windows 8, Node.js, and Windows Azure

          • Nathan Totten
          • November 1, 2012 from 4:15PM to 5:15PM
          • We are currently experiencing an exciting shift for JavaScript developers. For the first time, the Node.js and WinRT platforms along with modern browsers enable developers to write end-to-end applications in a single language that run on virtually any device. In this talk you will learn the fundamenta...
        • Windows 8 Connectathon with Windows Azure Mobile Services

          • Josh Twist
          • November 1, 2012 from 4:15PM to 5:15PM
          • Join us for a session packed with live coding, as Josh Twist builds a Windows 8 application and brings it to life with the connected power of Windows Azure Mobile Services. We’ll look at how easy it is to add authentication, secure structured storage and even send push notifications to update live t...

        • Advanced Cloud Services Development

          • Haishi Bai
          • November 2, 2012 from 8:30AM to 9:30AM
          • Come to learn how to build blazingly fast Cloud Services using new techniques and best practices. In this demo-loaded session, you’ll see how to put .Net 4.5, Windows Azure Caching, Windows Azure SDK, Server 2012 + IIS 8, CDN, Traffic Manager, as well as Service Bus at work to improve and mainta...

        • Data Options in Windows Azure. What's a developer to do?

          • Dave Campbell
          • November 2, 2012 from 8:30AM to 9:30AM
          • Remember the “good ‘ol days” when most developers developing data centric apps could take it for granted that they were going to use a relational database? Back then, the biggest question was, “What data access stack am I going to use?” Developers have a bewildering array of choice today – SQL, noSQ...

        • Bootstrapping your Startup with Windows Azure

          • Johnny Halife, Michael Washam, Nathan Totten
          • November 2, 2012 from 10:15AM to 11:15AM
          • Learn how to launch your next big idea on Windows Azure with a shoestring budget. Through real-world examples and live coding you will see how composing your application with Windows Azure services empowers you to build quickly and release sooner all while keeping costs to a minimum.
        • Developing for Windows Azure Web Sites and SharePoint Online

          • Yochay Kiriaty, Thomas Mechelke
          • November 2, 2012 from 10:15AM to 11:15AM
          • Windows Azure Web Sites is a simple and powerful hosting platform that allows developers to easily build and rapidly deploy web applications on Windows Azure using their favorite languages, frameworks, and tools. SharePoint Online brings the collaboration and productivity benefits of SharePoint to t...
        • Windows Azure Storage – Building applications that scale

          • Joe Giardino
          • November 2, 2012 from 10:15AM to 11:15AM
          • Are you interested in learning how to efficiently store petabytes of data? Write a social app that scales to billions of users? Build messaging that scales in distributed applications? Build a Windows 8 Application that stores data? If yes then this session is for you. It will cover what, when and h...

        I’ll expand the descriptions to their original content when I have more time


        <Return to section navigation list>

        Other Cloud Computing Platforms and Services

        •• Jeff Barr (@jeffbarr) reported New EC2 Second Generation Standard Instances and Price Reductions on 10/31/2012:

        imageWe launched Amazon EC2 with a single instance type (the venerable m1.small) in 2006. Over the years we have added many new instance types in order to allow our customers to run a very wide variety of applications and workloads.

        The Second Generation Standard Instances
        imageToday we are continuing that practice, with the addition of a second generation to the Standard family of instances. These instances have the same CPU to memory ratio as the existing Standard instances. With up to 50% higher absolute CPU performance, these instances are optimized for applications such as media encoding, batch processing, caching, and web serving.

        There are two second generation Standard instance types, both of which are 64-bit platforms with high I/O performance:

        • The Extra Large Instance (m3.xlarge) has 15 GB of memory and 13 ECU (EC2 Compute Units) spread across 4 virtual cores.
        • The Double Extra Large Instance (m3.2xlarge) has 30 GB of memory and 26 ECU spread across 8 virtual cores.

        The instances are now available in the US East (Northern Virginia) region; we plan to support them in the other regions in early 2013.

        On Demand pricing in the region for an instance running Linux starts at $0.58 per hour (Extra Large) and $1.16 per hour (Double Extra Large). Reserved Instances are available, and the instances can also be found on the EC2 Spot Market.

        Price Reductions
        As part of this launch, we are reducing prices for the first generation Standard (m1) instances running Linux in the US East (Northern Virginia) and US West (Oregon) regions by over 18% as follows:

        image

        There are no changes to the Reserved Instance or Windows pricing.

        Meet the Family
        With the launch of the m3 Standard instances, you can now choose from seventeen instance types across seven families. Let's recap just so that you are aware of all of your options (details here):

        • The first (m1) and second (m3) generation Standard (1.7 GB to 30 GB of memory) instances are well suited to most applications. The m3 instances are for applications that can benefit from higher CPU performance than offered by the m1 instances.
        • The Micro instance (613 MB of memory) is great for lower throughput applications and web sites.
        • The High Memory instances (17.1 to 68.4 GB of memory) are designed for memory-bound applications, including databases and memory caches.
        • The High-CPU instances (1.7 to 7 GB of memory) are designed for scaled-out compute-intensive applications, with a higher ratio of CPU relative to memory.
        • The Cluster Compute instances (23 to 60.5 GB of memory) are designed for compute-intensive applications that require high-performance networking.
        • The Cluster GPU instances (22 GB of memory) are designed for compute and network-intensive workloads that can also make use of a GPGPU (general purpose graphics processing unit) for highly parallelized processing.
        • The High I/O instance (60.5 GB of memory) provides very high, low latency, random I/O performance.

        With this wide variety of instance types at your fingertips, you might want to think about benchmarking each component of your application on every applicable instance type in order to find the one that gives you the best performance and the best value.
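
        To make that benchmarking advice concrete, here is a small sketch (my addition) that ranks instance types by cost per unit of work once you have measured throughput for your own workload; only the m3 prices come from Jeff's post, and the throughput figures are made-up placeholders:

        # Rank instance types by dollars per million operations.
        # Throughput numbers are placeholders; replace them with your own benchmark results.
        candidates = {
            # instance type: (On Demand Linux price per hour, measured ops/sec)
            'm3.xlarge':  (0.58, 2600.0),    # price from the post; throughput is a placeholder
            'm3.2xlarge': (1.16, 5100.0),    # price from the post; throughput is a placeholder
        }

        def cost_per_million_ops(price_per_hour, ops_per_sec):
            """Dollars spent to perform one million operations at the measured rate."""
            return price_per_hour / (ops_per_sec * 3600.0) * 1e6

        for itype, (price, ops) in sorted(candidates.items(),
                                          key=lambda kv: cost_per_million_ops(*kv[1])):
            print('%-12s  $%.4f per million ops' % (itype, cost_per_million_ops(price, ops)))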


        Chris Talbot (@ajaxwriter) reported Google Apologizes for App Engine Outage in a 10/30/2012 post to the TalkinCloud blog:

        ‘Tis the season for cloud outages–or so it seems. Last week, Amazon Web Services (AWS) suffered a major outage in its US East-1 data center in Northern Virginia, and now Google (NASDAQ: GOOG) has humbly begged forgiveness for an outage in a service that wasn't supposed to ever go down.

        On Friday, Peter S. Magnusson, engineering director at Google App Engine, posted a note to the Google Enterprise Blog in an effort to explain and apologize for an outage on Google App Engine. Here are the details: On Friday, Oct. 26, at 4 a.m. Pacific, loads started to increase in one of the App Engine data centers. By 6:30 a.m., Google had to do a global restart of the traffic routers to address the problem load in the affected data center. It wasn’t until 11:45 a.m. that App Engine had returned to normal operation.

        For those missing the math, that's nearly eight hours of problems on Google App Engine, although the real problems existed between 7:30 and 11:30 a.m., during which approximately 50 percent of requests to App Engine failed.

        Magnusson did the right thing by admitting the error and apologizing to all of the developers that use App Engine to develop and manage their apps.

        “We know you rely on App Engine to create applications that are easy to develop and manage without having to worry about downtime. App Engine is not supposed to go down, and our engineers work diligently to ensure that it doesn’t,” he wrote on Friday.

        Thankfully, no application data was lost. Application behavior was restored without any manual intervention by Google’s developers. Magnusson also noted that developers didn’t need to make any configuration changes to their applications.

        During the outage, developers using App Engine must have been pretty frustrated as they experienced increased latencies and time-out errors (I’m no developer, but time-out errors for anything get my ire up). This is a bit of an oddity for App Engine, though, as there has not been a systemwide outage since the launch of Google High Replication Datastore in January 2011.

        Magnusson wrote that Google will be proactively issuing credits to all paid applications for 10 percent of their usage in October to cover SLA violations. Customers don’t need to take action. Google will hopefully take steps to ensure this doesn’t happen again, but kudos to the company for taking steps to appease its hundreds of thousands of developer customers.


        Jaikumar Vijayan (@jaivijayan) asserted “Hadoop isn't enough anymore for enterprises that need new and faster ways to extract business value from massive datasets” in an introduction to his Moving beyond Hadoop for big data needs article for InfoWorld’s Big Data blog:

        Hadoop and MapReduce have long been mainstays of the big data movement, but some companies now need new and faster ways to extract business value from massive -- and constantly growing -- datasets.

        While many large organizations are still turning to the open source Hadoop big data framework, its creator, Google, and others have already moved on to newer technologies. …

        The Apache Hadoop platform is an open source version of the Google File System and Google MapReduce technology. It was developed by the search engine giant to manage and process huge volumes of data on commodity hardware. It's been a core part of the processing technology used by Google to crawl and index the Web.

        Hundreds of enterprises have adopted Hadoop over the past three or so years to manage fast-growing volumes of structured, semi-structured and unstructured data. The open source technology has proved to be a cheaper option than traditional enterprise data warehousing technologies for applications such as log and event data analysis, security event management, social media analytics and other applications involving petabyte-scale data sets.

        Analysts note that some enterprises have started looking beyond Hadoop not because of limitations in the technology, but because of the purposes for which it was designed.

        Hadoop is built for handling batch-processing jobs where data is collected and processed in batches. Data in a Hadoop environment is broken up and stored in a cluster of highly distributed commodity servers or nodes. In order to get a report from the data, users have to first write a job, submit it and wait for it to get distributed to all of the nodes and get processed.
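
        As a generic illustration of that write-a-job-and-wait batch model (this is my addition, not code from the article), here is the classic word-count job written for Hadoop Streaming; the mapper and reducer are separate scripts that you submit to the cluster and that only produce output once every node has finished:

        # ---- mapper.py : emits "word<TAB>1" for every word read from stdin ----
        import sys
        for line in sys.stdin:
            for word in line.strip().split():
                print('%s\t1' % word)

        # ---- reducer.py : sums counts per word (Hadoop sorts mapper output by key) ----
        import sys
        current_word, current_count = None, 0
        for line in sys.stdin:
            word, count = line.rstrip('\n').split('\t', 1)
            if word == current_word:
                current_count += int(count)
            else:
                if current_word is not None:
                    print('%s\t%d' % (current_word, current_count))
                current_word, current_count = word, int(count)
        if current_word is not None:
            print('%s\t%d' % (current_word, current_count))

        # Submitted as a batch job against the Hadoop Streaming jar, for example:
        #   hadoop jar hadoop-streaming.jar -file mapper.py -file reducer.py \
        #       -mapper mapper.py -reducer reducer.py -input /logs -output /wordcounts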

        While the Hadoop platform performs well, it's not fast enough for some key applications, says Curt Monash, a database and analytics expert and principal at Monash Research. For instance, Hadoop does not fare well in running interactive, ad hoc queries against large datasets, he said.

        "Hadoop has trouble with is interactive responses," Monash said. "If you can stand latencies of a few seconds, Hadoop is fine. But Hadoop MapReduce is never going to be useful for sub-second latencies."

        Companies needing such capabilities are already looking beyond Hadoop for their big data analytics needs. Google, in fact, started using an internally developed technology called Dremel some five years ago to interactively analyze or "query" massive amounts of log data generated by its thousands of servers around the world.

        Google says the Dremel technology supports "interactive analysis of very large datasets over shared clusters of commodity machines." The technology can run queries over trillion-row data tables in seconds, scales to thousands of CPUs and petabytes of data, and supports a SQL-like query language that makes it easy for users to interact with data and to formulate ad hoc queries, Google says. …
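
        For a sense of what such ad hoc queries look like in practice, here is a minimal sketch (my addition, not from the article) against BigQuery, Google's externally available service built on Dremel, using the google-api-python-client library; the project ID is a placeholder and an OAuth 2.0-authorized http object is assumed to exist already:

        from apiclient.discovery import build

        def run_adhoc_query(http, project_id):
            """Run an ad hoc SQL-like query against BigQuery (Dremel's public incarnation).
            `http` must already carry OAuth 2.0 credentials; `project_id` is a placeholder."""
            bigquery = build('bigquery', 'v2', http=http)
            response = bigquery.jobs().query(
                projectId=project_id,
                body={'query':
                      'SELECT title, COUNT(*) AS edits '
                      'FROM [publicdata:samples.wikipedia] '
                      'GROUP BY title ORDER BY edits DESC LIMIT 10'}).execute()
            for row in response.get('rows', []):
                print([cell['v'] for cell in row['f']])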



        Jeff Barr (@jeffbarr) announced AWS Storage Gateway – Now Generally Available and New Support for Gateway-Cached Volumes in a 10/29/2012 post:

        The AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage.

        We launched the Storage Gateway earlier this year. The initial release supported on-premises iSCSI volume storage (what we call Gateway-Stored volumes), with snapshot backups to the cloud. Volume data stored locally is pushed to Amazon S3, where it is stored in redundant, encrypted form and made available in the form of Elastic Block Storage (EBS) snapshots. When you use this model, the on-premises storage is primary, delivering low-latency access to your entire dataset, and the cloud storage is the backup. We’ve seen great pickup of the gateway during the beta, with many customers using the service for cost-effective, durable off-site backup.

        We are now adding support for Gateway-Cached volumes. With Gateway-Cached volumes, your storage volume data is stored encrypted in Amazon S3, visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on-premises for low-latency local access. You get low-latency access to your active working set, and seamless access to your entire data set stored in Amazon S3.
        Each Gateway-Cached volume can store up to 32 TB of data and you can create multiple volumes on each gateway. Cloud storage is consumed only as data is actually written to the volume, and you pay only for what you use. This means that you can use the Gateway-Cached volumes to economically store data sets that grow in size over time, without having to scale your on-premises storage infrastructure. Corporate directory trees, home directories, backup application data, and email archives are often well-suited to this model. Gateway-Cached volumes also provide the ability to take point-in-time snapshots of your volumes in Amazon S3, which you can use to store prior versions of your data. These snapshots are stored as Amazon EBS snapshots.

        Here's a diagram to put all of the pieces together:

        You can create and configure new volumes through the AWS Management Console. In addition to specifying the size of each new volume, you also have control over two types of on-premises storage: the upload buffer and the cache storage. Upload buffer is used to buffer your writes to Amazon S3. Cache storage holds your volumes’ recently accessed data. While the optimal size for each will vary based on your data access pattern, we generally recommend that you have an upload buffer that's large enough to hold one day's worth of changed data. If you create an 8 TB volume and change about 5% of it each day, a 400 GB upload buffer should do the trick. The cache storage should be large enough to store your active working set of data and at least as big as the upload buffer.
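
        Here's a quick sketch (my addition) of that sizing rule of thumb, using the post's own example numbers; the 500 GB working-set figure is a placeholder:

        def gateway_cached_sizing(volume_gb, daily_change_rate, working_set_gb):
            """Upload buffer holds roughly one day of changed data; cache storage covers
            the active working set and should be at least as large as the upload buffer."""
            upload_buffer_gb = volume_gb * daily_change_rate
            cache_gb = max(upload_buffer_gb, working_set_gb)
            return upload_buffer_gb, cache_gb

        # The post's example: an 8 TB (8,000 GB) volume with about 5% changing per day.
        buffer_gb, cache_gb = gateway_cached_sizing(8000, 0.05, working_set_gb=500)
        print('Upload buffer: %.0f GB, cache storage: at least %.0f GB' % (buffer_gb, cache_gb))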

        We are also taking this opportunity to promote the AWS Storage Gateway to General Availability. You can use it to support a number of important data storage scenarios like corporate file sharing, backup, and DR in a manner that seamlessly integrates local and cloud storage.

        We'll be running a free Storage Gateway webinar on December 5th, 2012. You'll learn how to use the AWS Storage Gateway to backup your data to Amazon S3. You’ll also learn how you can seamlessly store your corporate file shares on Amazon S3, while keeping copies of frequently-accessed files on-premises.

        You can get started with the AWS Storage Gateway by taking advantage of our free 60-day trial. After that, there is a charge of $125/month for each activated gateway. Pricing for Gateway-Cached storage starts at $0.125 per gigabyte per month. Register for your free trial and get started today!


        Derrick Harris (@derrickharris) analyzed Rackspace versus Amazon: The big data edition in a 10/29/2012 post to GigaOm’s Cloud blog:

      • Rackspace is busy building a Hadoop service, giving the company one more avenue to compete with cloud kingpin Amazon Web Services. However, the two services — along with several others on the market — highlight just how different seemingly similar cloud services can be.

        Rackspace has been on a tear over the past few months releasing new features that map closely to the core features of the Amazon Web Services platform, only with a Rackspace flavor that favors service over scale. Its next target is Amazon Elastic MapReduce, which Rackspace will be countering with its own Hadoop service in 2013. If AWS and Rackspace are, indeed, the No. 1 and No. 2 cloud computing providers around, it might be easy enough to make a decision between the two platforms.

        In the cloud, however, the choices are never as simple as black or white.

        Amazon versus Rackspace is a matter of control

        Discussing its forthcoming Hadoop service during a phone call on Friday, Rackspace CTO John Engates highlighted the fundamental product-level differences between his company and its biggest competitor, AWS. Right now, for users, it’s primarily a question of how much control they want over the systems they’re renting — and Rackspace comes down firmly on the side of maximum control.

        For Hadoop specifically, Engates said Rackspace’s service will “really put [users] in the driver’s seat in terms of how they’re running it” by giving them granular control over how their systems are configured and how their jobs run (courtesy of the OpenStack APIs, of course). Rackspace is even working on optimizing a portion of its cloud so the Hadoop service will run on servers, storage and networking gear designed specifically for big data workloads. Essentially, Engates added, Rackspace wants to give users the experience of owning a Hadoop cluster without actually owning any of the hardware.

        “It’s not MapReduce as a service,” he added, “it’s more Hadoop as a service.”

        The company partnered with Yahoo spinoff Hortonworks on this in part because of its expertise and in part because its open source vision for Hadoop aligns closely with Rackspace’s vision around OpenStack. “The guys at Hortonworks are really committed to the real open source flavor of Hadoop,” Engates said.

        Rackspace’s forthcoming Hadoop service appears to contrast somewhat with Amazon’s three-year-old and generally well-received Elastic MapReduce service. The latter lets users write their own MapReduce jobs and choose the number and types of servers they want, but doesn’t give users system-level control on par with what Rackspace seems to be planning. For the most part, it comports with AWS’s tried-and-true strategy of giving users some control of their underlying resources, but generally trying to offload as much of the operational burden as possible.

        Elastic MapReduce also isn’t open source, but is an Amazon-specific service designed around Amazon’s existing S3 storage system and other AWS features. When AWS did choose to offer a version of Elastic MapReduce running a commercial Hadoop distribution, it chose MapR’s high-performance but partially proprietary flavor of Hadoop.
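
        For comparison, here is a rough sketch (my addition, not from Derrick's post) of how a job flow is submitted to Elastic MapReduce with the boto Python library; the bucket names and script paths are placeholders, and AWS credentials are assumed to come from the environment or boto config:

        from boto.emr.connection import EmrConnection
        from boto.emr.step import StreamingStep

        conn = EmrConnection()   # credentials picked up from the environment / boto config

        step = StreamingStep(
            name='Word count',
            mapper='s3n://my-bucket/scripts/mapper.py',   # placeholder bucket and paths
            reducer='s3n://my-bucket/scripts/reducer.py',
            input='s3n://my-bucket/input/',
            output='s3n://my-bucket/output/')

        jobflow_id = conn.run_jobflow(
            name='Example job flow',
            log_uri='s3n://my-bucket/logs/',
            steps=[step],
            master_instance_type='m1.small',
            slave_instance_type='m1.small',
            num_instances=3)

        print(jobflow_id)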

        It doesn’t stop with Hadoop

        Rackspace is also considering getting into the NoSQL space, perhaps with hosted versions of the open source Cassandra and MongoDB databases, and here too it likely will take a different tack than AWS. For one, Rackspace still has a dedicated hosting business to tie into, where some customers still run EMC storage area networks and NetApp network-attached storage arrays. That means Rackspace can’t afford to lock users into a custom-built service that doesn’t take their existing infrastructure into account or that favors raw performance over enterprise-class features.

        Rackspace needs stuff that’s “open, readily available and not unique to us,” Engates said. Pointing specifically to AWS’s fully managed and internally developed DynamoDB service, he suggested, “I don’t think it’s in the fairway for most customers that are using Amazon today.”

        Perhaps, but early DynamoDB success stories such as IMDb, SmugMug and Tapjoy suggest the service isn’t without an audience willing to pay for its promise of a high-performance, low-touch NoSQL data store.

        Which is better? Maybe neither

        There’s plenty of room for debate over whose approach is better, but the answer for many would-be customers might well be neither. When it comes to hosted Hadoop services, both Rackspace and Amazon have to contend with Microsoft’s newly available HDInsight service on its Windows Azure platform, as well as IBM’s BigInsights service on its SmartCloud platform. Google appears to have something cooking in the Hadoop department, as well. For developers who think all these infrastructure-level services are too much work, higher-level services such as Qubole, Infochimps or Mortar Data might look more appealing.

        The NoSQL space is rife with cloud services, too, primarily focused on MongoDB but also including hosted Cassandra and CouchDB-based services.

        In order to stand apart from the big data crowd, Engates said Rackspace is going to stick with its company-wide strategy of differentiation through user support. Thanks to its partnership with Hortonworks and the hybrid nature of OpenStack, for example, Rackspace is already helping customers deploy Hadoop in their private cloud environments while its public cloud service is still in the works. “We want to go where the complexity is,” he said, “where the customers value our [support] and expertise.”




      • Barb Darrow (@gigabarb) asserted Amazon suit shows Google as public cloud threat in a 10/28/2012 post to GigaOm’s Cloud blog:

        Google Compute Engine may have launched less than six months ago, but it’s already a serious competitor to Amazon Web Services. At least Amazon appears to think so. It just lodged a lawsuit against a former AWS sales executive who is joining Google, according to Geekwire, which first reported the news.

        Daniel Powers, an IBM veteran, joined AWS as VP of sales in 2010, learned the business “from top to bottom” and was privy to company trade secrets, according to the suit. In June 2012, Amazon offered him a severance package in return for a signed non-compete agreement. This suit, filed in Washington State Superior Court, charged that Powers’ decision to join Google violates the terms of that pact, which required him to stay clear of directly competitive work for 18 months.

        Google launched the Google Compute Engine in late June and, by virtue of the company’s experience in web-scale computing, it has to be seen as a potential problem for Amazon, the biggest provider of public cloud infrastructure. Amazon is also seeing more competition come online from Rackspace, HP, and Microsoft.

        Cloudscaling raised eyebrows a few weeks ago when it said its OpenStack-based private cloud will extend into both AWS and Google public clouds. Cloudscaling execs at the time said they see a lot of customer demand for an alternative to Amazon cloud infrastructure.

        Neither Amazon nor Google could be reached for comment. If you want the nitty gritty from the suit, check out the filing below.

        Amazon vs. Daniel Powers


        <Return to section navigation list>
