Wednesday, July 11, 2012

Windows Azure and Cloud Computing Posts for 7/10/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Avkash Chauhan (@avkashchauhan) described Windows Azure SDK 1.7 Storage Emulator and LocalDB in a 7/10/2012 post:

With Windows Azure SDK 1.7, the Windows Azure Storage Emulator uses a LocalDB instance-specific configuration at the following location:

C:\Users\<yourloginname>\AppData\Local\DevelopmentStorage\DevelopmentStorage.201206.config

The config XML is as follows:

<?xml version="1.0"?>
<DevelopmentStorage xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2009-03-18">
  <SQLInstance>(localdb)\v11.0</SQLInstance>
  <PageBlobRoot>C:\Users\avkashc\AppData\Local\DevelopmentStorage\PageBlobRoot</PageBlobRoot>
  <BlockBlobRoot>C:\Users\avkashc\AppData\Local\DevelopmentStorage\BlockBlobRoot</BlockBlobRoot>
  <LogPath>C:\Users\avkashc\AppData\Local\DevelopmentStorage\Logs</LogPath>
  <LoggingEnabled>false</LoggingEnabled>
</DevelopmentStorage>

Based on the above, you can see that the “v11.0” LocalDB instance is specific to the Windows Azure Storage Emulator. When DSInit runs for the first time, it creates the v11.0 instance in LocalDB. You can verify this as follows:

c:\>sqllocaldb i
v11.0

If you want to use a local SQL Server Express database instead of LocalDB, you need to change the <SQLInstance>**</SQLInstance> property to the appropriate instance name.

After that, you need to run DSInit (http://msdn.microsoft.com/en-us/library/windowsazure/gg433005.aspx) as shown below to reconfigure the local storage database:

> DSInit [/sqlinstance:<DatabaseInstanceName> | /server:<Machine name> | /autodetect] [/silent] [/forcecreate]
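For example, to point the emulator at a local SQL Server Express named instance (the instance name SQLEXPRESS below is an assumption; substitute your own), the command might be:

> DSInit /sqlinstance:SQLEXPRESS /forcecreate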


Carl Nolan (@carl_nolan) described a Framework for .Net Hadoop MapReduce Job Submission TextOutput Type in a 7/10/2012 post:

Some recent changes made to the “Generics based Framework for .Net Hadoop MapReduce Job Submission” code were to support Json and Binary Serialization from the Mapper, in and out of Combiners, and out from the Reducer. However, this precluded one from controlling the format of the Text output. Say one wanted to create a tab-delimited string from the Reducer. This could only be done using Json Serialization. To better support allowing one to construct the final text output I have created a new TextOutput type.

This TextOutput type is simple in structure. However, when this type is encountered during the serialization process, both Json and Binary serialization are bypassed and the text is written out in its raw format, including tabs and other characters usually escaped by the Json serializer.

As an example, here is a modified version of one of the C# Reducer samples that supports both Json and Text output:

namespace MSDN.Hadoop.MapReduceCSharp
{
    [DataContract]
    public class MobilePhoneRange
    {
        [DataMember] public TimeSpan MinTime { get; set; }
        [DataMember] public TimeSpan MaxTime { get; set; }

        public MobilePhoneRange(TimeSpan minTime, TimeSpan maxTime)
        {
            this.MinTime = minTime;
            this.MaxTime = maxTime;
        }

        public TextOutput ToText(string format)
        {
            return new TextOutput(String.Format(@"({0}, {1})", this.MinTime.ToString(format), this.MaxTime.ToString(format)));
        }
    }

    public class MobilePhoneRangeReducer : ReducerBase<TimeSpan, MobilePhoneRange>
    {
        public override IEnumerable<Tuple<string, MobilePhoneRange>> Reduce(string key, IEnumerable<TimeSpan> value)
        {
            var baseRange = new MobilePhoneRange(TimeSpan.MaxValue, TimeSpan.MinValue);
            var rangeValue = value.Aggregate(baseRange, (accSpan, timespan) =>
                new MobilePhoneRange((timespan < accSpan.MinTime) ? timespan : accSpan.MinTime, (timespan > accSpan.MaxTime) ? timespan : accSpan.MaxTime));
            yield return new Tuple<string, MobilePhoneRange>(key, rangeValue);
        }
    }

    public class MobilePhoneRangeTextReducer : ReducerBase<TimeSpan, TextOutput>
    {
        public override IEnumerable<Tuple<string, TextOutput>> Reduce(string key, IEnumerable<TimeSpan> value)
        {
            var baseRange = new MobilePhoneRange(TimeSpan.MaxValue, TimeSpan.MinValue);
            var rangeValue = value.Aggregate(baseRange, (accSpan, timespan) =>
                new MobilePhoneRange((timespan < accSpan.MinTime) ? timespan : accSpan.MinTime, (timespan > accSpan.MaxTime) ? timespan : accSpan.MaxTime));
            yield return new Tuple<string, TextOutput>(key, rangeValue.ToText("G"));
        }
    }
}

For the sample Reducer above the Json serialization output would be:

Android {"MaxTime":"PT23H59M54S","MinTime":"PT6S"}
RIM OS {"MaxTime":"PT23H59M58S","MinTime":"PT1M7S"}
Unknown {"MaxTime":"PT23H52M36S","MinTime":"PT36S"}
Windows Phone {"MaxTime":"PT23H55M17S","MinTime":"PT32S"}
iPhone OS {"MaxTime":"PT23H59M50S","MinTime":"PT1S"}

The corresponding Text Output would be:

Android (0:00:00:06.0000000, 0:23:59:54.0000000)
RIM OS (0:00:01:07.0000000, 0:23:59:58.0000000)
Unknown (0:00:00:36.0000000, 0:23:52:36.0000000)
Windows Phone (0:00:00:32.0000000, 0:23:55:17.0000000)
iPhone OS (0:00:00:01.0000000, 0:23:59:50.0000000)

As mentioned, the actual definition of the TextOutput type is simple and is just a wrapper over a string, although depending on needs this may change:

type TextOutput(value:string) =
    // Internal text value
    let mutable text = value

    /// String value of the TextOutput class
    member this.Text
        with get () = text
        and set (value) = text <- value

    /// Byte array value of the TextOutput class
    member this.Bytes
        with get () = Encoding.UTF8.GetBytes(text)
        and set (value:byte array) = text <- Encoding.UTF8.GetString(value)

    new(value:byte array) =
        TextOutput(Encoding.UTF8.GetString(value))

    new(value:TextOutput) =
        TextOutput(value.Text)

    new() =
        TextOutput(String.Empty)

    /// Clear Text
    member this.Clear() =
        text <- String.Empty

    /// Append Text
    member this.Append(value:string) =
        text <- text + value
        text

    /// ToString override
    override this.ToString() =
        text

    /// Equals override
    override this.Equals(value:obj) =
        if not (value.GetType() = typeof<TextOutput>) then
            false
        else
            let objText = (value :?> TextOutput)
            text.Equals(objText.Text)

    /// GetHashCode override
    override this.GetHashCode() =
        text.GetHashCode()

One of the main rationales for adding the TextOutput support is so that data output by the framework can be easily used by Hive CREATE TABLE statements.

Hope you find this change useful.


Denny Lee (@dennylee) explained How Klout changed the landscape of social media with Hadoop and BI Slides Updated in a 7/9/2012 post:

One of the key themes that Dave Mariani (@dmarini) and I were talking about during this year’s Hadoop Summit was that:

Hadoop and BI are better together

as I have detailed in many previous blog posts. As noted during our session – How Klout changed the landscape of social media with Hadoop and BI – combining Hadoop and BI is something you can do right now to accelerate your understanding of the data that you have amassed in your Hadoop cluster.

For more information, we have updated our Hadoop Summit slides to include additional screenshots for your reference.

How Klout is changing the landscape of social media with Hadoop and BI

As well, we have created a GitHub repository at https://github.com/dennyglee/Caprica which will contain the sample code on how to build a web/social SDK using Scala, node.js, Hadoop, Hive, and Analysis Services as we had shown during our session.

Click here for more details about the Apache Hadoop on Windows Azure preview.


Matthew Aslett posted Hadoop is dead. Long live Hadoop on 7/9/2012 to the 451 Group’s Information Management blog:

GigaOM published an interesting article over the weekend written by Cloudant’s Mike Miller about why the days are numbered for Hadoop as we know it.

Miller argues that while Google’s MapReduce and file system research inspired the rise of the Apache Hadoop project, Google’s subsequent research into areas such as incremental indexing, ad hoc analytics and graph analysis is likely to inspire the next generation of data management technologies.

We’ve made similar observations ourselves but would caution against assuming, as some people appear to have done, that implementations of Google’s Percolator, Dremel and Pregel projects are likely to lead to Hadoop’s demise. Hadoop’s days are not numbered. Just Hadoop as we know it.

Miller makes this point himself when he writes “it is my opinion that it will require new, non-MapReduce-based architectures that leverage the Hadoop core (HDFS and Zookeeper) to truly compete with Google’s technology.”

As we noted in our 2011 Total Data report:

“it may be that we see more success for distributed data processing technologies that extend beyond Hadoop’s batch processing focus… Advances in the next generation of Hadoop delivered in the 0.23 release will actually enable some of these frameworks to run on the HDFS, alongside or in place of MapReduce.”

With the ongoing development of that 0.23 release (now known as Apache Hadoop 2.0) we are beginning to see that process in action. Hadoop 2.0 includes the delivery of the much-anticipated MapReduce 2.0 (also known as YARN, as well as NextGen MapReduce). Whatever you choose to call it, it is a new architecture that splits the JobTracker into its two major functions: resource management and application lifecycle management. The result is that multiple versions of MapReduce can run in the same cluster, and that MapReduce becomes one of several frameworks that can run on the Hadoop Distributed File System.

The first of these is Apache HAMA – the bulk synchronous parallel computing framework for scientific computations, but we will also see other frameworks supported by Hadoop – thanks to Arun C Murthy for pointing to two of them – and fully expect the likes of incremental indexing, ad hoc analytics and graph analysis to be among them.

As we added in Total Data:

“This supports the concept recently raised by Apache Hadoop creator Doug Cutting that what we currently call ‘Hadoop’ could perhaps be thought of as a set of replaceable components in a wider distributed data processing ecosystem… the definition of Hadoop might therefore evolve over time to encompass some of the technologies that could currently be seen as potential alternatives…”

The future of Hadoop is… Hadoop.


Robin Shahan (@RobinDotNet) completed her series with Azure for Developers Tutorial Step 7: Use Table Storage instead of a SQL Database on 7/9/2012:

This is the seventh and final step of the Azure for Developers tutorial, in which we set up a WCF service running in Azure to provide CRUD operations to a client application. For more information, please check out the Introduction.

We have a WCF service running in a web role that reads from and writes to a SQL Database. It submits messages to an Azure queue, and there is a worker role that retrieves the entries from the queue and writes them to blob storage. We have the diagnostics working, and we have a client that calls the service.

Why do I care?

If you have a lot of data, it’s much less expensive to store it in Windows Azure Tables than in a SQL Database. But table storage is like indexed sequential flat files from days of yore – there are no secondary indices. You get to define a partition key for your table; Microsoft tries to keep all of the data in a partition together. You don’t want to have one partition with all of your millions of records in it – this is not efficient. But you might split the data by what country your customer is in, or by a range of customer id’s, or something like that. You also can define a Row Key, which, when combined with the partition key, makes up the primary key for the table. So if country was your partition key, the row key might be customerID, for example.

You can store different kinds of data in the same table, but this is not a good design idea, as it will confuse the people filling in for you when you’re on vacation. …

Robin continues with the C# code for her tutorial.
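As a minimal illustration of the partition key and row key idea (my own sketch using the Windows Azure storage client library of that era, not code from Robin’s tutorial – the class and property names are assumptions), a table entity partitioned by country with the customer ID as the row key could look like this:

using Microsoft.WindowsAzure.StorageClient;

public class CustomerEntity : TableServiceEntity
{
    public CustomerEntity() { }

    public CustomerEntity(string country, string customerId)
    {
        PartitionKey = country;   // keeps a country's customers together in one partition
        RowKey = customerId;      // unique within the partition; together they form the primary key
    }

    public string Name { get; set; }
    public string Email { get; set; }
}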


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Tim Anderson (@timanderson) compared Microsoft SQL Azure versus SQL Server on Amazon AWS in a 7/11/2012 post:

Amazon RDS for Microsoft SQL Server offers cloud instances of SQL Server. Amazon’s offering even supports “License Mobility”, Microsoft jargon that lets volume licensing customers use an existing SQL Server license for an Amazon instance. But how does Amazon’s cloud SQL Server compare with Microsoft’s own offering, SQL Database running on Azure?

Peter Marriott has posted on the subject here (registration required). The key point: despite the obvious similarity (both are SQL Server), these two offerings are radically different. Amazon’s RDS SQL is more IaaS (infrastructure as a service) than PaaS (platform as a service). You choose an edition of SQL Server and rent one or more instances. The advantage is that you get full SQL Server, just like the on-premise editions but hosted by Amazon.

Microsoft’s Azure-hosted SQL, on the other hand, is more abstracted. You do not rent a SQL Server instance; you rent a database. Under the covers Microsoft provides multiple redundant copies of the data, and if traffic increases, it should scale automatically, though the database size is limited to 150GB. The downside is that not all features of SQL Server are available, as I discovered when migrating data.

Marriott adds that SQL Azure supports encrypted connections and has a more usable administration interface.

A further twist: you can also install SQL Server on an Azure Virtual Machine, which would get you something more like the Amazon approach though I suspect the cost will work out higher.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

Derrick Harris (@derrickharris) reported IDC: Analytics a $51B business by 2016 thanks to big data in a 7/11/2012 post to GigaOm’s Structure blog:

The market for business analytics software grew 14 percent in 2011 and will hit $50.7 billion in revenue by 2016, according to market research firm IDC. And, that segment will grow at a 9.8-percent-a-year clip until then, IDC predicts, driven in part by the current hype around big data.

The renewed importance of analytics software comes as the idea of big data has opened companies’ eyes as to the types of insights their data can provide far beyond what traditional analytics efforts yielded. Platform technologies such as Hadoop are letting companies store more data than ever before possible and crunch types of data not traditionally used.

Analytics software is a key component of big data strategies because it’s the stuff that lets companies actually analyze and visualize their data. Vendors in this space are having to retool their products — many products have been around for years, if not decades – for the age of big data. By IDC’s estimates, data warehousing was the fastest growing analytics area in 2011, increasing 15.2 percent, followed by analytics applications at 13.3 percent and BI tools at 13.2 percent.

By contrast, IDC recently predicted that the almost brand new market for Hadoop software and services will grow at about 60 percent a year until 2016, reaching $812.8 million, up from $77 million today. It predicted the market for big data overall (which doesn’t include the higher-level analytics software) will reach $16.9 billion by 2015, up from $3.2 billion in 2010.

Raluca Hera of the Windows PowerShell Team posted Introducing Management OData Schema Designer on 7/10/2012:

We are excited to introduce the new Management OData Schema Designer tool.

The tool’s goal is to accelerate evaluation/development on top of the “Management OData IIS Extension” optional Windows Server 2012 feature. A very informative introduction to this feature can be found in the Standards based management in Windows Server 8 post by Jeffrey Snover and Wojtek Kozaczynski.

The new tool has:

  • a user-friendly schema creation experience (as a wizard)
  • custom Management OData endpoint deployment capabilities
  • schema editing and validation capabilities

There are 2 versions available:

  • a stand-alone version of the tool: x86/x64 (targeted at ITPros)
    Prerequisites: Visual Studio Isolated Shell
  • a Visual Studio 2010 Ultimate/Pro plugin (the same functionality as the stand-alone tool)

Please use it and let us know if it is useful to you via the “Issue tracker”.


Arlo Belshee (@arlobelshee) reported in an Announcing ODataLibrary for .Net thread of 7/10/2012 in the OData Mailing List:

I want to announce the release of a new OData implementation for .Net.

ODataLib is an open-source implementation of the more intricate parts of the OData protocol. It consists of a set of independent components; each is used by WCF Data Services to perform some aspect of the OData stack. This library is intended to meet three goals:

  • Provide a library instead of a framework. It does less for you than the frameworks, but it doesn't constrain your application as much.
  • Favor performance over ease of use. ODataLib tends towards low-level APIs.
  • Provide a sample implementation for things that are easy to do inconsistently.

ODataLib includes client components, service components, and components that are useful in both. For details, see http://odata.codeplex.com/wikipage?title=ODataLib.

Documentation is pretty much non-existent for this release. I intend to start addressing that over the next couple of weeks.

You can get this release from:

This is the first of many ODL releases. I look forward to your feedback. Is there anything I can do to make it work better for you?

Arlo Belshee
Sr. Program Manager, OData, Microsoft

You can join the OData Mailing list here.


<Return to section navigation list>

Windows Azure Service Bus, Caching, Active Directory and Workflow

Matias Woloski (@woloski) described Configuring SharePoint 2010 to accept Google and ADFS identities on 7/10/2012 to his Auth10 blog (apparently reposted from 7/4/2012):

In this article we will walk through the process of configuring a SharePoint 2010 application to use claims-based federated identity. This is one of the scenarios that we’ve heard about a lot from customers. If you ever did this manually you probably spent at least a week trying to figure out all the details. So many steps (some of them rather obscure) often lead to errors and a lot of time troubleshooting them. Our goal with Auth10 is to get that down to minutes, instead of days or weeks.

At a high level, these are the steps that we will follow.

  • Configure trust between SharePoint and Windows Azure Active Directory (previously known as Windows Azure Access Control Service).
  • Configure trust between Windows Azure Active Directory and Google.
  • Configure trust between Windows Azure Active Directory and ADFS.
  • Connect SharePoint with Google and ADFS.

We will do all of this using the Auth10 Dashboard, which is a tool that leverages the Windows Azure Active Directory APIs to simplify, accelerate and promote best practices around federated identity.

Here is a diagram of the scenario

(Original images are too large for clickable full-size screen captures; see Matias’ original post)

If you are a visual learner, then you can skip all these and watch these two screencasts of about 2-3 minutes each:

Otherwise, keep reading!

1 - Create the application

Sign up to Auth10 (no credit card, it’s free).

In the dashboard, create a new application, give it a friendly name and choose “SharePoint 2010” as the application type.

Enter the URL of the SharePoint application (this URL will be used to locate the SharePoint app from the scripts later) and the attribute that will identify the user.

2- Configure SharePoint

Now you have to download a configuration package that will automate the configuration with SharePoint 2010 using PowerShell. The setup page will also show what the script will actually do. This script is generated by Auth10 and is customized with the information you supplied.

Unzip the package and execute the RunMe.cmd in the SharePoint server. It could take 5 minutes at most.

3- Create the Google user group

Go back to the Auth10 dashboard and create a user group. Give it a friendly name like “Google Users”

Notice that here you have the option to restrict the users logging in with this identity provider. This is optional but could be useful as a first level authorization for your applications (no need to touch the app). Windows Azure Active Directory won’t issue a token if the user doesn’t belong to that list.

Example: Let’s say that you have a SharePoint portal or just a simple web application and you want to give access to the group of designers who need to exchange information with your employees. You would create a user group called “Designers” and add the emails to that list. So when you connect that group with an application, only those users will be able to access that application.

4- Connect the SharePoint app with the Google user group

We have created the application and the user group, now it’s time to connect them. To do that, you can click on connect and drag and drop the app and the user group

Choose a Passthrough rule to simply copy all the attributes coming from the identity provider to the application

This is how the dashboard will look after connecting the app and the user group:

5- Login to SharePoint with Google

Now it’s time to test it! Browse to the SharePoint application and you should get a screen like this:

The user gets redirected to Google. Enter the user and password.

The user has been logged in, but doesn’t have permissions to access the site with that identity yet. The portal administrator would have to give access using the People Picker for that Google email.

6- Create the Active Directory Federation Services user group

Now that we have the SharePoint application working with Google users, let’s add another user group. This time we will choose Active Directory Federation Services 2.0 (ADFS) as the authentication type.

When you configure a trust relationship with ADFS, you will need the public key and the ADFS endpoint. ADFS provides a FederationMetadata document containing that information and it’s located at https://[your_adfs_server]/FederationMetadata/2007-06/FederationMetadata.xml

ADFS will only issue tokens to registered applications (formally called Relying Parties). Once the user group has been created Auth10 will show the Setup instructions and a configuration package you can download that will contain a script that uses the ADFS PowerShell CmdLets to create the Relying Party Trust and some additional rules in ADFS. If you click “More information about this configuration package” you can see what this script will do in detail.

7- Connect the SharePoint app with the ADFS user group

As we did with Google, we have to connect SharePoint with this user group that will authenticate with ADFS. To do that, we will click the Connect button, drag and drop SharePoint and the Test Company Employees AD user group, and create a Passthrough mapping rule. This is how the dashboard looks after we have the SharePoint app connected with the two groups.

8- Login to SharePoint with ADFS

The final test is to login to SharePoint using ADFS. When browsing to the SharePoint application, we now get Google and ADFS as authentication options. It’s worth mentioning that this login dialog can be customized and even have different options (based on a subdomain, a querystring, etc.)

If we choose Test Company Employees AD we will get redirected to the ADFS.

Finally a token is issued by ADFS and POSTed to Windows Azure Active Directory. WAAD will generate another token that will finally be POSTed to SharePoint and we are logged in (again with a user who doesn’t have access yet).

Conclusion

This walkthrough demonstrates a scenario of a SharePoint application being accessed by users logging in with Google and ADFS. It can be implemented quickly and easily with Auth10, within minutes instead of days or weeks, using best practices.


Manu Cohen-Yashar (@ManuKahn) described ACS and OAuth 2.0 in a 7/10/2012 post:

I was asked by a customer about the OAuth 2.0 endpoint in the ACS management portal.

Well, ACS can participate in the OAuth dance. Its role is to produce an authorization code for the user’s resource and then produce the actual access token that will enable a client application to access the user’s resources at the resource server.

There is a demo provided by the ACS team demonstrating OAuth delegation with ACS. I found a very good blog post explaining the OAuth flow of the sample in great detail. I recommend viewing the following 10-minute video to get a better understanding of OAuth before trying to understand this nice demo.


Haishi Bai (@HaishiBai2010) posted Windows Azure Caching Memcache Interoperability Tutorials 1 – Using Client Shim on 7/10/2012:

Memcached is an open-source, distributed in-memory caching solution that is widely used by many large-scale websites such as Wikipedia, Digg, YouTube, Flickr, Twitter, and LiveJournal. It uses a client-server architecture and a simple, open, community-driven wire protocol – the Memcached protocol. The protocol has two versions - a text version and a binary version. Under the text protocol, clients send commands to the cache cluster as ASCII text strings and data as raw byte streams. On the other hand, the binary protocol defines a simple packet format, which contains a 24-byte header, a key field, a value field, and additional fields as needed.

The core operations under both versions of the protocol are “get” and “put” operations, which allow clients to save and retrieve objects, identified by keys, to and from the server cluster. In other words, a cache cluster is a distributed hash table that contains the key-value pairs users put in. The Memcached protocol doesn’t dictate how the keys are calculated, leaving the clients to decide how to allocate items across member servers within a cache cluster. When the number of member servers is not static, client programs need to implement consistent hashing in order to maintain a balanced distribution of items. Windows Azure Caching (Preview), on the other hand, takes on the responsibility of distributing items across member servers itself so that clients don’t need to care about item distribution.
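To make the text protocol concrete, a minimal exchange between a client and a Memcached server looks roughly like this (the key name is just an example; server responses are shown inline):

set greeting 0 0 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END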

You can enable Memcached protocol support by either enabling a server-side gateway or by adding a client-side shim. The server-side gateway provides native Memcached protocol support – it listens on a Memcached socket and handles requests sent via Memcached protocol. However, because of different strategy in key management, the gateway needs to re-hash item keys so that Windows Azure Caching (Preview) can manage item distributions on the server side. This degrades performance in performance-sensitive scenarios because there’s a potential extra network hop during item distribution. The client-side shim is a protocol translator – it translates between Memcached protocol and Window Azure Caching API. There’s no extra network hop in this case, but the packets need to be repackaged to compensate for protocol differences. You can read more about the two choices and recommended usages here.

Tutorial – Using client shim

In this tutorial, we’ll start with a simple ASP.Net application that uses a third-party Memcached client to communicate with a Memcached cluster. Then, we’ll convert the application into a Windows Azure Cloud Service, which hosts a Windows Azure Caching cluster in a dedicated worker role. At last, we’ll add client shim to the application so it can communicate with Windows Azure Caching cluster as if it was still working with a Memcached cluster.

Part I – Setting up a Memcached server

This tutorial assumes you are familiar with Windows Azure Cloud Services development, but it doesn’t assume you have any prior knowledge of Memcached. So, let’s start by setting up a local Memcached cluster. You can skip this part if you already have a Memcached cluster configured.

  1. Get Couchbase Server 1.8.0 Community Edition from Couchbase’s download page. Couchbase Server is a NoSQL database server that has built-in Memcached caching support (I’ll leave it to interested readers to look up the histories of Memcached, CouchDB and Couchbase).
  2. Run the setup program. The install wizard finishes with a congratulations page.
  3. Click SETUP to configure your caching cluster. On the next page, select Start a new cluster and change Per server RAM Quota to 256 (MB) – we don’t need too much for our testing purposes. Click Next to continue.
  4. On the next page, accept the default settings and click Next to continue.
  5. On the next page, provide your contact info if you’d like to join the community. Otherwise click Next to continue.
  6. At last, set an Administrator password, and then click Next to finish.
  7. Now comes the tricky part – by default Couchbase Servers use their IP addresses as identifiers. If your machine uses a dynamic IP, your cluster will fail when your IP address changes. Before we continue, you should update your Couchbase Server settings to use a DNS name, a static IP address, or localhost (for local testing only) to identify your server node. You can find detailed instructions here. I chose to use localhost because I was running everything locally.

Part II – Accessing Memcached cluster from ASP.Net application

  1. Launch Visual Studio 2010/2012 as an administrator (screenshots in this tutorial show 2012).
  2. Create a new ASP.Net MVC 4 Web Application, name the project MemcachedClientShimTutorial.
  3. Choose Internet Application template, and click OK to create the application.
  4. In Solution Explorer, right-click on the project and select Manage NuGet Packages….
  5. Search for memcache and then install EnyimMemcached from the search results. This installs the Enyim .NET Memcached client library.
  6. Modify the Index() method of HomeController. Replace the existing code with the following and resolve references. The code is very straightforward – it creates a Memcached client configuration, initializes a client using the configuration, and performs a set operation and a get operation.
    public ActionResult Index()
    {
      MemcachedClientConfiguration config = new MemcachedClientConfiguration();
      config.Servers.Add(new IPEndPoint(IPAddress.Loopback, 11211));
      config.Protocol = MemcachedProtocol.Binary;
                
      MemcachedClient client = new MemcachedClient(config);
    
      client.Store(StoreMode.Set, "Test", "This is a test value");
    
      ViewBag.Message = client.Get<string>("Test");
    
      return View();
    }
  7. Run the application. If everything works out, you should see “This is a test value” on the application home page.

Part III – Converting the ASP.Net application to a Windows Azure Cloud Service with a caching cluster hosted in a Worker Role

  1. In Solution Explorer, right-click on MemcachedClientShimTutorial project and select Add Windows Azure Cloud Service Project.
  2. Right-click on the Roles node under the newly added cloud project and select Add -> New Worker Role Project….
  3. Pick Cache Worker Role template, and name the new project CacheWorkerRole. Then, click Add button to add the project.
  4. Launch the application. You’ll see local Compute Emulator launching and the application working just as before.

Part IV – Adding client shim and redirecting client to Windows Azure Caching cluster

  1. In Solution Explorer, right-click on MemcachedClientShimTutorial project and select Manage NuGet Packages….
  2. Search for memcache, and install Windows Azure Caching Memcache Shim Preview (you probably need to flip through a couple of pages to locate the package).
  3. The installer will modify your web.config file to include settings for caching client. Open web.config file and replace [server role name goes here] with CacheWorkerRole (or to match with your cache worker role name).
  4. In Solution Explorer, double-click MemcachedClientShimTutorial.Azure –> Roles –> MemcachedClientShimTutorial node to bring up property page for the web role.
  5. Go to the Endpoints tab. Observe that the installer also added a memcache_default endpoint. Change its private port from 11211 to 12345 (because we are already running the Memcached cluster at 11211 on the same machine).
  6. Now go back to HomeController and edit Index() method again. The only line you need to change is to update the service address from:
    config.Servers.Add(new IPEndPoint(IPAddress.Loopback, 11211));
    to:
    config.Servers.Add(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["memcache_default"].IPEndpoint);
    Now your Memcached client has been redirected to the client shim, which in turn accesses the Windows Azure Caching cluster hosted in your worker role. Indeed, there are no code changes other than changing the server address!
  7. Run the application. Chances are that the home page now loads without the “This is a test value” message. Refresh the page and the message will come up. Why does this happen? When the home page first loads, the hosted cache cluster has not completed initialization yet. A retry block is probably something you want to consider for production code; a minimal sketch follows.
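    As a rough illustration of that retry idea (my own sketch, not part of Haishi’s tutorial – the attempt count and delay are arbitrary), the cache read in Index() could be wrapped in a simple loop:
    string message = null;
    for (int attempt = 0; attempt < 5 && message == null; attempt++)
    {
        // Enyim's Get<T> returns null on a miss, so keep retrying briefly
        // while the hosted cache cluster finishes initializing.
        message = client.Get<string>("Test");
        if (message == null)
        {
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(2));
        }
    }
    ViewBag.Message = message ?? "Cache not ready yet";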

Summary

In this tutorial we set up a local Memcached server, connected to it from ASP.NET, created a Windows Azure Caching cluster hosted in a Windows Azure Worker Role, and finally modified our Memcached client to use the new cache. In future posts we’ll go through the steps of using the server-side gateway and other topics such as security.


Anton Staykov (@astaykov) described Unified Identity for Web Apps – the easy way in a 7/10/2012 article for Red Gate Software’s ACloudyPlace blog:

In this article we’re going to take a look at how to protect a web application via security tokens. We already know how to configure a Windows Azure Active Directory Access Control (WAADAC, formerly known as Windows Azure Access Control Service, or just ACS), so now we’ll go through the steps involved when setting up security tokens.

There are a few key points you have to look after in order to be happy with the results and eliminate most of the rookie mistakes. Before we begin, here’s the WAADAC demo code if you just want to play around with it. Let’s create our first web application and prepare it for token/claims authentication. In order to successfully run the solution (and complete the code if you decide to do it yourself) you need to have the Windows Identity Foundation (WIF) SDK installed on your computer. Open Visual Studio and create a new ASP.NET Web Application. For the demo, my app is named “WaadacDemo”. The tool that comes with the SDK (the Federation Utility tool) plugs the required modules into the <system.webServer> section of our web.config file. This section is only recognizable by IIS 7.0+ and/or IIS Express.

The first thing we need to do is to set up our web application to work with IIS Express (or local IIS) instead of using ASP.NET Development server (a.k.a. Cassini), which does not recognize and respect the <system.webServer> section. While setting this option, I also chose a specific port for our application just to make sure it runs on the same port every time. All these settings are managed in the “Web” tab under the “Properties” window of the Windows Azure Web Application project:

Local IIS Settings in Windows Azure Active Directory Access Control

We’ll write in some of the application settings so that we can go and register it as a Relying Party Application (or just RP) in our ACS namespace:

Now, create a namespace and register the application with it. You can read more on how to do this in my previous article “Online identity management via Windows Azure Access Control Service”. Add Live ID, Yahoo, and Google as Identity Providers and create the default pass-through rules for all the IdPs registered with the application. Now get the federation metadata URL from your ACS Management portal and copy it:

Application Integration

Now we add the Windows Azure Active Directory Access Control STS (Security Token Service) as an STS reference to our web application. Fortunately for us, it’s fairly easy to accomplish when we have WIF SDK installed. Just right-click on the Web Application project and select the “Add STS reference” menu item:

security token service

Follow the wizard, which will ask a couple of trivial questions such as the location of the web.config file to be edited to inject token processing modules, and the application URI. The former is automatically filled out, while in the latter you need to enter the URI of your application (http://localhost:2700/ as configured earlier). When you click “next” an error message (ID1007) will pop up asking you whether you want to continue with securing an application which is not on the HTTPS protocol – which is fine for our demo purposes.

Federation utility Wizard

The next screen will ask us to provide details about the STS (Security Token Service) we want to rely on. Here you need to choose the “Use an existing STS” option to make use of Windows Azure ACS. Paste the Federation Metadata URL that you took from your Access Control Service (you can find it under “Application Integration” menu on the left) and click “Next”.

security token service wizard

Next, we’re asked about encrypting the security token. Let’s leave it unencrypted for now (which is the default). Then we come to a window with the claims offered by the STS; just click “Next”. The final window is a summary of all tasks that will be performed in order for our application to leverage claims authentication. Click “Finish” and now you have a claims-ready application!

Make sure you don’t rush testing. An ASP.NET validation error will be triggered when you successfully get the security token from ACS and try to present it to the web application. Why is that? Remember what a security token is? It’s simply XML-structured data. Digitally signed XML data, which is being POST-ed to your “Return URL”. And the default ASP.NET validation does not like it! How do you avoid that? The default error message suggests that we set <httpRuntime requestValidationMode=”2.0” /> in the web.config file, under <system.web>, and disable request validation for the particular page (usually default.aspx). This would, however, kill your application’s security, exposing your return URL to potential attacks, so you don’t want to do that.

There is another way to avoid validation error messages. You could use Microsoft’s provided CustomRequestValidator (found in almost any lab), which is part of the Identity Training Kit. You need to set it up in httpRuntime again, but this way it would be <httpRuntime requestValidationType=”MyCustomRequestValidator, MyAssembly” />. Much better than turning off validation. A rough sketch of such a validator follows.
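For reference, this is roughly what such a validator looks like – the sketch below is patterned after the well-known WIF sample rather than the exact Identity Training Kit code, so the class name and details are assumptions. It lets a WS-Federation sign-in response (the wresult form field) through while leaving normal request validation intact:

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

public class WsFederationRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value,
        RequestValidationSource requestValidationSource, string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        // Only relax validation for the form field that carries the signed token.
        if (requestValidationSource == RequestValidationSource.Form &&
            string.Equals(collectionKey, "wresult", StringComparison.Ordinal))
        {
            var message = WSFederationMessage.CreateFromFormPost(context.Request)
                          as SignInResponseMessage;
            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource,
            collectionKey, out validationFailureIndex);
    }
}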

There is, however, a third and less known approach, which I find particularly interesting, suggested in Erik Brandstadmoen’s blog. He suggests that you configure the Return URL to a “black hole”, which is essentially a non-existing resource, and disable the request validation for that resource. You are safe to do that in terms of switching off request validation, as these requests will not execute any of your application’s pages/handlers. So, now we have a slightly different web.config, with an additional location element:

<location path="WIFHandler">
  <system.web>
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</location>

Slightly modified WIF (wsfederation) section:

<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://localhost/STS/" realm="https://localhost/MyApp/"
                    reply="https://localhost/MyApp/WIFHandler/" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>

Don’t forget, however, that you must match the “reply” parameter of the wsFederation element with the “Return URL” parameter of your Relying Party configuration in the Access Control Service.

A final step that I suggest you explore is to play with the rules. In the provided code sample I have included an /Admin section of the site. It is protected with the standard ASP.NET role protection mechanism:

<location path="Admin">
  <system.web>
    <authorization>
      <allow roles="Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>

I’ve also included a menu item linking to this location. When you run the app, you will not be able to access that part of it. In order to access it, you need to add a particular claims transformation rule in the ACS. Go to the ACS management portal, navigate to the Rule Groups, select the rule group which you created for your app and click “Add” to add a new rule:

Save this new rule.

Try to run your application now! You will be automatically redirected to the ACS login page, where you can choose which IdP to use. Authenticate with either Yahoo or Google and your application will get a Name claim and an EmailAddress claim, along with NameIdentifier and IdentityProvider claims. If you use Live ID to sign in, your application will only get a NameIdentifier claim and an IdentityProvider claim (you can read more on these claims issues in my posts on the basics of Claims and on online identity management). If you login with the configured Gmail account, you will also be able to navigate to the /Admin section of the site.
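As an aside (my own sketch, not from Anton’s article), once WIF has processed the token you can read those claims from the current principal in your page code; the helper name below is an assumption, and the claim type URI is the standard EmailAddress claim type:

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsHelper
{
    // Returns the signed-in user's email address claim, or null if it wasn't issued
    // (for example, when the user signed in with Live ID).
    public static string GetEmailAddress()
    {
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null) return null;

        var emailClaim = identity.Claims.FirstOrDefault(c =>
            c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress");
        return emailClaim != null ? emailClaim.Value : null;
    }
}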

And finally, you may ask – why does the “Sign Out” link in my code work, while the Sign Out link for the application you just created doesn’t? Well, I edited the master page (Site.Master) and replaced the default “LoginStatus” control with the WIF-provided “FederatedPassiveSignInStatus” control, which will do the work for you.

Full disclosure: I’m a paid contributor to the ACloudyPlace blog.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Molly Bostic described Resources to Get Started with Windows Azure Web Sites in a 7/11/2012 post:

Windows Azure Web Sites is a new feature that lets you host web sites on Windows Azure very quickly and easily. You can have a new site up-and-running in less than five minutes! The Develop and Manage centers on WindowsAzure.com provide a variety of resources to help you get started using Web Sites.

Develop and Deploy Your Site

One of the cool things about Web Sites is that you can use a huge variety of tools to develop and deploy your app. Across the .NET, Node.js, and PHP dev centers, we’ve included tutorials that show a full range of options.

Create a Site Through the Gallery

The easiest development and deployment tools are none at all. Through the gallery in Windows Azure Management Portal, you can create and deploy a wide range of popular applications in less than five minutes, without any special software or tools beyond your web browser. Check out the tutorials for Creating a WordPress site and Creating an Orchard CMS site for examples.

Deploy with Git, FTP, or TFS

You can easily deploy sites to Windows Azure using a variety of technologies. The following common tasks and tutorials will help you get started quickly.

Develop and deploy with Visual Studio, WebMatrix, or Command-line Development Tools

If you’re developing in a .NET language, you can use Visual Studio to develop and deploy your web site. You can get Visual Studio development tools as part of the Windows Azure SDK all-in-one install. (Click the big blue button on the dev center home page to kick off the installer.)

WebMatrix is a free, lightweight Windows-based IDE that you can use to develop web sites. WebMatrix provides tight integration with the Windows Azure management Portal (you can even launch WebMatrix to edit a web site from within the portal), plus it provides useful development features including IntelliSense. Check out the tutorials for .NET, Node.js, and PHP to learn more.

The latest releases of the Windows Azure SDKs include command-line development tools that can be used on any platform—Windows, Mac, or Linux. Start with the Node.js and PHP how-to guides for details about how to get and use the tools to create, deploy, and manage your web sites.

Manage your Site

Once your site is up-and-running, the Windows Azure Management Portal provides tools you can use to manage your site. The Web Sites Manage Center provides videos and articles that will help you make the most of the available management tools. The manage center includes articles about how to configure, manage, monitor, and scale your site.

Molly is Sr. Managing Editor, Windows Azure


Brian Swan (@brian_swan) explained Configuring PHP in Windows Azure Websites with .user.ini Files in a 7/10/2012 post:

I wrote a post a few weeks ago (Windows Azure Websites: A PHP Perspective) in which I suggested using the ini_set function to change PHP configuration settings in Windows Azure Websites. While that approach works, I briefly want to point out in this post that you can use a .user.ini file to configure PHP in Windows Azure Websites. If you are familiar with using .user.ini files, then you can just start using them.

If you aren’t familiar with .user.ini files, the approach is detailed here: .user.ini files. Basically, you create a file called .user.ini that contains PHP configuration settings and put it in your root directory (or a subdirectory if you want the settings to only apply there). For example, let’s say I wanted to turn display_errors on and change the upload_max_filesize setting to 10 megabytes. Then the contents of my .user.ini file would simply be…

display_errors = On
upload_max_filesize = 10M

One “git push azure master”, and my custom PHP configuration settings are in effect for my site…almost. The one gotcha I ran into here is that the frequency with which PHP reads .user.ini files is governed by the user_ini.cache_ttl setting, which is 300 seconds (5 minutes) by default. And, since this is a master setting in the php.ini file, you can’t change it. If, however, you want to see your changes in effect right away, you can simply stop and re-start your website.
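For example (my addition, not from Brian’s post), you can confirm that the new values have taken effect by echoing them from a test page:

<?php
// Prints the effective values after the .user.ini file has been picked up.
echo ini_get('display_errors') . '<br/>';
echo ini_get('upload_max_filesize');
?>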

One other thing worth noting here is that you cannot change PHP configuration settings that have mode PHP_INI_SYSTEM. To see a list of settings and their modes, see List of php.ini directives.

That’s it!


Dhananjay Kumar (@debug_mode) described how to Create Windows Azure Website in 6 Steps in a 7/9/2012 post:

In this post we will create a Windows Azure Web Site in 6 simple steps. Windows Azure Web Sites is a new feature of Windows Azure that was introduced on June 7th.

Step 1

First, you need to log in to the Windows Azure Management Portal. After a successful login, click on WEB SITES in the left panel.

Step 2

Next click on CREATE A WEBSITE

Step 3

You can create a Web Site in three ways:

  1. Quick Create
  2. Create with DataBase
  3. From Gallery

In this post we are creating the site using the Quick Create option.

We have provided the URL and selected the region to host the website.

Step 4

After successfully creating the website, click on the name of the website.

On clicking it, the DASHBOARD will open. On the Dashboard you can see all the details about the website. From the quick glance section, select the Download publish profile option.

Make sure that in the Configure tab the .NET Framework version is set to V4.0.

Step 5

Open Visual Studio and create a new ASP.NET MVC 4.0 application.

Choose the Internet Application project template.

Step 6

You can edit the website per your requirements. In this post our purpose is to learn how to publish an ASP.NET MVC Internet Application to a Windows Azure Web Site. To do that, right-click on the project and choose Publish.

Next we need to import the publish profile file. Click on the Import button and choose the file we downloaded previously in step 4.

Select the publish profile file to import.

You will notice all the settings have been imported. Click on Publish to publish the ASP.NET MVC application to the Windows Azure Web Site.

After a successful publish operation, the Windows Azure Web Site will open in the default browser.


Maarten Balliauw (@maartenballiauw) described Tweaking Windows Azure Web Sites in a 7/8/2012 post (missed when posted):

A while ago, I was at a customer who wanted to run his own WebDAV server (using www.sabredav.org) on Windows Azure Web Sites. After some testing, it seemed that this PHP-based WebDAV server was missing some configuration at the webserver level. Some HTTP keywords required for the WebDAV protocol were not mapped to the PHP runtime, making it virtually impossible to run a custom WebDAV implementation on PHP. Unless there’s some configuration possible…

I’ve issued a simple phpinfo(); on Windows Azure Websites, simply outputting the PHP configuration and all available environment variables in Windows Azure Websites. This revealed the following interesting environment variable:

Windows Azure Web Sites web.config

Aha! That’s an interesting one! It’s basically the configuration of the IIS web server you are running. It contains which configuration sections can be overridden using your own Web.config file and which ones can not. I’ve read the file (it seems you have access to this path) and have placed the output of it here: applicationhost.config (70.04 kb). There’s also a file called rootweb.config: rootweb.config (36.66 kb)

Overridable configuration parameters

For mere humans not interested in reading through the entire applicationhost.config and rootweb.config, here’s what you can override in your own Web.config. Small disclaimer: these are implementation details and may be subject to change. I’m not Microsoft so I cannot predict if this will all continue to work. Use your common sense.

Configuration parameter – Can be overridden in Web.config?
system.webServer.caching Yes
system.webServer.defaultDocument Yes
system.webServer.directoryBrowse Yes
system.webServer.httpErrors Yes
system.webServer.httpProtocol Yes
system.webServer.httpRedirect Yes
system.webServer.security.authorization Yes
system.webServer.security.requestFiltering Yes
system.webServer.staticContent Yes
system.webServer.tracing.traceFailedRequests Yes
system.webServer.urlCompression Yes
system.webServer.validation Yes
system.webServer.rewrite.rules Yes
system.webServer.rewrite.outboundRules Yes
system.webServer.rewrite.providers Yes
system.webServer.rewrite.rewriteMaps Yes
system.webServer.externalCache.diskCache Yes
system.webServer.handlers Yes, but some are locked
system.webServer.modules Yes, but some are locked
Project Kudu

There are some interesting things in the applicationhost.config (70.04 kb). Of course, you decide what’s interesting so read for yourself. Here’s what I found interesting: project Kudu is in there! Project Kudu? Yes, the open-source engine behind Windows Azure Web Sites (which implies that you can in fact host your own Windows Azure Web Sites-like service).

If you look at the architectural details, here’s an interesting statement:

    • The Kudu site runs in the same sandbox as the real site. This has some important implications.
    • First, the Kudu site cannot do anything that the site itself wouldn't be able to do itself. (…) But being in the same sandbox as the site, the only thing it can harm is the site itself.
    • Furthermore, the Kudu site shares the same quotas as the site. That is, the CPU/RAM/Disk used by the Kudu service is counted toward the site's quota. (…)
    • So to summarize, the Kudu services completely relies on the security model of the Azure Web Site runtime, which keeps it both simple and secure.

Proof can be found in applicationhost.config. If you look at the <sites /> definition, you’ll see two sites are defined. Your site, and a companion site named ~1yoursitename. The first one, of course, runs your site. The latter runs project Kudu which allows you to git push and use webdeploy.

In rootweb.config (36.66 kb), you’ll find the load-balanced nature of Windows Azure Web Sites. A machine key is defined there which will be the same for all your web site instances, allowing you to share session state, forms authentication cookies, etc.

My PHP HTTP verbs override

To fix the PHP HTTP verb mapping, here’s the Web.config I’ve used at the customer, simply removing and re-adding the PHP handler:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="PHP53_via_FastCGI" />
      <add name="PHP53_via_FastCGI" path="*.php"
           verb="GET, PUT, POST, HEAD, OPTIONS, TRACE, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK"
           modules="FastCgiModule" scriptProcessor="D:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
           resourceType="Either" requireAccess="Script" />
    </handlers>
  </system.webServer>
</configuration>


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Doug Mahugh (@dmahugh) described a MongoDB Installer for Windows Azure in a 7/9/2012 post to the Interoperability @ Microsoft blog:

imageDo you need to build a high-availability web application or service? One that can scale out quickly in response to fluctuating demand? Need to do complex queries against schema-free collections of rich objects? If you answer yes to any of those questions, MongoDB on Windows Azure is an approach you’ll want to look at closely.

People have been using MongoDB on Windows Azure for some time (for example), but recently the setup, deployment, and development experience has been streamlined by the release of the MongoDB Installer for Windows Azure. It’s now easier than ever to get started with MongoDB on Windows Azure!

MongoDB

MongoDB is a very popular NoSQL database that stores data in collections of BSON (binary JSON) objects. It is very easy to learn if you have JavaScript (or Node.js) experience, featuring a JavaScript interpreter shell for administrating databases, JSON syntax for data updates and queries, and JavaScript-based map/reduce operations on the server. It is also known for a simple but flexible replication architecture based on replica sets, as well as sharding capabilities for load balancing and high availability. MongoDB is used in many high-volume web sites including Craigslist, FourSquare, Shutterfly, The New York Times, MTV, and others.

If you’re new to MongoDB, the best way to get started is to jump right in and start playing with it. Follow the instructions for your operating system from the list of Quickstart guides on MongoDB.org, and within a couple of minutes you’ll have a live MongoDB installation ready to use on your local machine. Then you can go through the MongoDB.org tutorial to learn the basics of creating databases and collections, inserting and updating documents, querying your data, and other common operations.
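For instance, a first session in the mongo shell might look like this (the database and collection names are just examples):

use demo
db.people.insert({ name: "Ann", age: 31 })                 // insert a BSON document into a collection
db.people.find({ age: { $gt: 30 } })                       // query with a JSON-style filter
db.people.update({ name: "Ann" }, { $set: { age: 32 } })   // update a single field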

MongoDB Installer for Windows Azure

The MongoDB Installer for Windows Azure is a command-line tool (Windows PowerShell script) that automates the provisioning and deployment of MongoDB replica sets on Windows Azure virtual machines. You just need to specify a few options such as the number of nodes and the DNS prefix, and the installer will provision virtual machines, deploy MongoDB to them, and configure a replica set.

Once you have a replica set deployed, you’re ready to build your application or service. The tutorial How to deploy a PHP application using MongoDB on Windows Azure takes you through the steps involved for a simple demo app, including the details of configuring and deploying your application as a cloud service in Windows Azure. If you’re a PHP developer who is new to MongoDB, you may want to also check out the MongoDB tutorial on php.net.

Developer Choice

MongoDB is also supported by a wide array of programming languages, as you can see on the Drivers page of MongoDB.org. The example above is PHP-based, but if you’re a Node.js developer you can find the tutorial Node.js Web Application with Storage on MongoDB over on the Developer Center, and for .NET developers looking to take advantage of MongoDB (either on Windows Azure or Windows), be sure to register for the free July 19 webinar that will cover the latest features of the MongoDB .NET driver in detail.

The team here at Microsoft Open Technologies is looking forward to working closely with 10gen to continue to improve the MongoDB developer experience on Windows Azure going forward. We’ll keep you updated here as that collaboration continues!

Doug Mahugh
Senior Technical Evangelist
Microsoft Open Technologies, Inc.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Kostas Christodoulou posted Application Logo (tailored on customer demand) on 7/11/2012:

This is the most recent comment in the Application Logo Sample post:

“Hi Kostas,
I wanted to find out if there is a way of opening an active screen as soon as I click on the logo. I have tried but my attempt was in vain. If it is possible, could you please help me out?
Thanks,
Darryn”

My first reaction was to ask for details (typical way of buying time). But then I thought I could do the changes required and post a new article. So simply put, this is the answer I propose to Darryn:

public static void AddLogo(this Microsoft.LightSwitch.Client.IClientApplication application, System.Windows.HorizontalAlignment alignment, string screenName) {
  Microsoft.LightSwitch.Threading.Dispatchers.Main.BeginInvoke(() => { LogoPlacement.AddLogo(application, System.Windows.Application.Current.RootVisual, alignment, screenName); });
}


private static class LogoPlacement
{
  // The RibbonCommandBar located in the visual tree (referenced throughout AddLogo);
  // this field declaration is needed for the snippet to compile.
  private static Microsoft.LightSwitch.Runtime.Shell.Implementation.Standard.RibbonCommandBar rcb;
  internal static void AddLogo(Microsoft.LightSwitch.Client.IClientApplication application, UIElement element, System.Windows.HorizontalAlignment alignment, string screenName) {
    if (rcb != null) return;

    for (int i = 0; i < System.Windows.Media.VisualTreeHelper.GetChildrenCount(element); i++) {
      if (rcb != null)
        return;
      UIElement child = (UIElement)System.Windows.Media.VisualTreeHelper.GetChild(element, i);
      AddLogo(application, child, alignment, screenName);
    }
    if (element is Microsoft.LightSwitch.Runtime.Shell.Implementation.Standard.RibbonCommandBar) {
      rcb = element as Microsoft.LightSwitch.Runtime.Shell.Implementation.Standard.RibbonCommandBar;
      Image myImage = new Image() {
        Stretch = System.Windows.Media.Stretch.Uniform,
        Margin = new Thickness(2, 8, 14, 8),
        HorizontalAlignment = alignment,
        Cursor = System.Windows.Input.Cursors.Hand
      };
      myImage.SetValue(ComponentViewModelService.ViewModelNameProperty, "Default.LogoViewModel");
      myImage.SetBinding(Image.SourceProperty, new System.Windows.Data.Binding { Path = new PropertyPath("Logo") });
      if (!string.IsNullOrWhiteSpace(screenName))
        myImage.MouseLeftButtonUp += (s, e) => { application.Details.Dispatcher.BeginInvoke(() => application.Details.Methods[string.Format("Show{0}", screenName)].CreateInvocation().Execute()); };
      myImage.SizeChanged += (s, e) => {
        double left = (s as Image).HorizontalAlignment == HorizontalAlignment.Left ? e.NewSize.Width + 10.0 : 0.0;
        double right = (s as Image).HorizontalAlignment == HorizontalAlignment.Right ? e.NewSize.Width + 10.0 : 0.0;
        rcb.Padding = new Thickness(left, 0, right, 0);
      };
      ((Grid)rcb.Parent).Children.Add(myImage);
    }
  }
}

In the original post, the additions/changes I had to make to accommodate Darryn’s request (essentially the new screenName parameter and the MouseLeftButtonUp handler that opens the screen) are marked in bold.
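For context, here is a minimal usage sketch following the calling pattern of the original Application Logo Sample, where the extension method is invoked from Application_Initialize in the client project. The screen name “HomeScreen” is purely illustrative; it must match an existing screen so that the generated ShowHomeScreen method exists.

public partial class Application
{
  partial void Application_Initialize()
  {
    // Place the logo on the left side of the ribbon and open the (hypothetical)
    // HomeScreen screen when the logo is clicked.
    this.AddLogo(System.Windows.HorizontalAlignment.Left, "HomeScreen");
  }
}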

See also Kostas’ related Application Logo Sample and Application Logo posts.


Kostas Christodoulou continued his series with Simple Extension Methods (part 2) on 7/11/2012:

In the previous post I presented an extension method used mostly for overriding the edit and delete commands of a collection. One may ask “why do I want to do this?”. Apart from any other requirements/business logic dependent reason one might want to implement, for me there is one simple yet important reason: I don’t like at all (to be kind) the default add/edit modal windows when adding or editing an entry. It’s not a coincidence that the FIRST sample I wrote for LightSwitch and posted in the Samples of msdn.com/lightswitch was a set of extension methods and contracts to easily replace standard modal windows with custom ones.

Most of the time, when I have an editable grid screen and select Add or Edit, I DON’T want the modal window to pop up; I just want to edit in the grid. Or, in a list and details screen, I want to edit the new or existing entry in the detail part of the screen.

This is the main reason I usually override the default Add/Edit command behavior, and it is why I created and use the next two extension methods:

public static void AddFocus<T>(this VisualCollection<T> collection, string focusControl, bool inCollection)
  where T : class, IEntityObject {
  collection.AddNew();
  if (focusControl != null) {
    try {
      if (inCollection)
        collection.Screen.FindControlInCollection(focusControl, collection.SelectedItem).Focus();
      else
        collection.Screen.FindControl(focusControl).Focus();
    }
    catch {
      collection.EditSelected();
    }
  }
}

public static void EditFocus<T>(this VisualCollection<T> collection, string focusControl, bool inCollection)
  where T : class, IEntityObject {
  if (focusControl != null) {
    try {
      if (inCollection)
        collection.Screen.FindControlInCollection(focusControl, collection.SelectedItem).Focus();
      else
        collection.Screen.FindControl(focusControl).Focus();
    }
    catch {
      collection.EditSelected();
    }
  }
}

So what you have to do in your MyCollectionAddAndEditNew_Execute partial method is write something like:
MyCollection.AddFocus("TheNameOfTheControlYouWantToFocus", true);
or for MyCollectionEdit_Execute partial method:
MyCollection.EditFocus("TheNameOfTheControlYouWantToFocus", true);

If you have a list and details screen, the same code applies but with inCollection = false.
One could implement one method for collections and one for details. Also notice the where constraints applied to the T generic parameter. The class constraint is required for declaring VisualCollection<T>. The IEntityObject constraint is required so that we avoid an arbitrary cast of collection.SelectedItem to IEntityObject when passing it to FindControlInCollection as the second parameter.

Notice that in the extension methods I don’t check if the control name exists; I just use try/catch. On error (if the control name is not valid) I choose to fall back to the default behavior. You can change this without any intellectual rights being violated.


Paul Patterson (@PaulPatterson) described Microsoft LightSwitch – Branching a Project in TFS on 7/10/2012:

With the many LightSwitch demos and examples I throw out there, I usually have a go-to LightSwitch application that I use. To keep the integrity of my “original” source project, I leverage the branching feature of TFS to help me keep a clean copy of my projects.

Microsoft LightSwitch – Branching a Project in TFS

Here is how I do this…

The first thing I do is launch into the Team Explorer panel in Visual Studio 2012….

Next, I select to poke around the source code in my TFS by clicking the Source Control Explorer link in the Team Explorer panel…

The Source Control Explorer is a great tool for navigating around TFS. Here, I’ll navigate to the project that contains the source folder I want to start with…

Next, I’ll right-click the folder, and then navigate to and select the Branch… command…

Haha. Looks like GIMP photo-bombed me in the above screen shot!

Then, in the Branch from dialog box, I give the branched version of the project a new name as well as a description…

… and then click Yes in the confirmation dialog …

Then I let Visual Studio and TFS do its magic…

Once done, I see that my project has been branched, and ready for use…

Yay! Now all I need to do is navigate to the solution file for my newly branched project, and start playing.

But first, I need to download the project source…

Now I am good to go. I can navigate to the solution file, double click it, and presto my project is ready to rock!

Hope you find this helpful.


Eric Erhardt of the Visual Studio LightSwitch Team (@VSLightSwitch) described Concurrency Enhancements in Visual Studio LightSwitch 2012 in a 7/10/2012 post:

Many problems can occur in applications when multiple users are allowed to edit the same record at the same time. Some simple applications take a “last edit wins” approach, where the last person to save their changes gets their changes applied in the end. This has the obvious problem that any user who changed the record between the time the last user read the record and saved his change will get their changes lost. This issue is usually unacceptable in business applications.

To solve this issue, many applications add the ability to detect if another user has changed the same record between the time you read it and attempted to save changes to it. If a change by another user is detected, the save is aborted and a message is presented to the user saying someone else has edited the same record. In a lot of applications, this is where the concurrency features end. The user is expected to discard their changes, read the most up-to-date copy of the record, and reapply the changes.

Visual Studio LightSwitch takes concurrency one step further and allows the end user the option of “merging” his changes with the conflicting change. That way, no changes will be overwritten and the user doesn’t have to reapply all his changes. This saves a lot of time for the business user.

Concurrency in Visual Studio LightSwitch 2011

In order to detect changes, LightSwitch 2011 uses the original values of all the properties of an entity. So if you have two string properties, FirstName and LastName, a SQL update statement for the entity will look like

UPDATE Customers
    SET FirstName = 'new FirstName',
        LastName = 'new LastName' 
    WHERE ID = 2 AND
        FirstName = 'original FirstName' AND
        LastName = 'original LastName'

Notice the WHERE clause has expressions for FirstName = ‘original FirstName’ and LastName = ‘original LastName’. If someone else has changed FirstName’s value or LastName’s value, these expressions will evaluate to FALSE, which will then cause the update to fail. If the update fails, a concurrency error is raised to notify the user the record has already been changed.

Issues with Concurrency in Visual Studio LightSwitch 2011

There are three primary issues with how concurrency is implemented in LightSwitch 2011.

First, some columns cannot be compared in SQL Server. Examples of these column types are text, image, xml, etc. Since LightSwitch relies on comparing original values in the WHERE clause, it is not possible to detect concurrency conflicts on these column types. If only these non-comparable columns have been changed, LightSwitch will suffer from the “last edit wins” problem illustrated above.

Second, this approach assumes that the client is trusted to send in the correct original values. This puts extra responsibilities on the client and makes the server trust the original values coming from the client.

Third, if every update request includes all the new values plus all the original values, the size of every update request is essentially doubled. Similarly, delete requests would need all of the original values, bloating the request. For small applications this isn’t a terrible problem, but it doesn’t scale well to large applications.

Concurrency in Visual Studio LightSwitch 2012

Visual Studio LightSwitch 2012 uses the OData protocol for communication between the client and server. The OData protocol uses the HTTP ETag part of the HTTP protocol to enable concurrency conflict detection. Basically, every property that should be used for concurrency conflict detection has its original value serialized into the ETag value when the item is read. This ETag value is sent to the client along with all the values of the entity being read. A client making an update will submit this ETag value along with the updated property values to the server. The server will check to make sure the ETag is valid (i.e. no one else has made an update to the record in the meantime). If the ETag isn’t valid, the update is rejected and the user is informed that someone else has edited the same record. See Pablo Castro’s blog post “Optimistic Concurrency & Data Services” for more information on how WCF Data Services implements concurrency for the OData protocol.
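As a rough illustration of that flow from a plain .NET client (not LightSwitch-generated code), the WCF Data Services client tracks the ETag returned when an entity is read and automatically sends it back as an If-Match header on the update. The service URI and the Customer type below are assumptions made for the sake of the example:

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

[DataServiceKey("Id")]
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

class ETagExample
{
    static void Update(Uri serviceRoot)
    {
        DataServiceContext context = new DataServiceContext(serviceRoot);

        // Reading the entity captures the ETag the service returned for it.
        Customer customer = context.Execute<Customer>(
            new Uri("Customers(2)", UriKind.Relative)).First();

        customer.FirstName = "new FirstName";
        context.UpdateObject(customer);

        try
        {
            // The stored ETag is sent as an If-Match header with the update request.
            context.SaveChanges();
        }
        catch (DataServiceRequestException)
        {
            // HTTP 412 Precondition Failed: someone else changed the record in the meantime.
            Console.WriteLine("Concurrency conflict - reload the entity and retry.");
        }
    }
}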

Now, an astute reader will notice a fourth problem when trying to combine LightSwitch 2011’s concurrency behavior with the OData protocol. It is the same as the third problem above: doubling the size of the payload, only now the problem occurs on all read operations and not just during an update operation. Since LightSwitch 2011 wants to use all properties for concurrency conflict detection, and the ETag contains the serialized original values of all properties used for concurrency, all property values will be serialized twice in a read payload.

An even worse problem is that if your entity is too big (many properties or really long strings), it won’t fit in the ETag, since ETags are commonly put in HTTP headers, on which most clients and servers impose a size limit for security reasons.

Because of all these issues, LightSwitch 2012 has changed its default concurrency conflict detection.

ApplicationData

When you create a table in Visual Studio LightSwitch 2012 (or upgrade an existing application from a previous version), a generated column is added to your tables named RowVersion. The RowVersion column is a SQL Server rowversion or timestamp column. A rowversion column in SQL Server gets updated to a new value every time the record is updated. This makes it perfect to use for concurrency conflict detection since it is relatively small (8 bytes) and is guaranteed to be changed whenever any column is changed.

This solves all of our problems listed above. The overhead of detecting concurrency conflicts is now minimal – an extra 8 bytes is attached to each record that is strictly used for concurrency conflict detection. This falls well inside any HTTP header size limits and doesn’t bloat read or update requests. Whenever any column is changed, the rowversion column is updated so now all concurrency conflicts can be detected. And the server doesn’t require the client to send all of the original values with an update request. The client is only required to send the 8 byte ETag value.

The RowVersion property is not shown in the Entity Designer, but is shown in other places in the LightSwitch IDE. It was removed from the Entity Designer because it cluttered up the designer and developers can’t make any changes to it anyway. Rest assured that this property doesn’t show up in the screens you create by default. So your end users won’t know the difference.

An interesting feature you can add to your application is to detect whether a record has been changed since the last time it was read. To do this, you can create a query with @Id and @RowVersion parameters. Use these parameters to filter the records where Id == @Id and RowVersion == @RowVersion. You can pass in a record’s Id and current RowVersion values into the parameters of this query. If no record is returned, then the record has been modified (or deleted). If a record is returned, then the record must not have changed in the database.
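A rough sketch of that pattern in screen code is shown below. It assumes you have defined a query named Customers_IsUnchanged on the Customers table with Id and RowVersion parameters and the filter described above, and that the screen has a Customer property; all of these names are hypothetical.

partial void CheckForChanges_Execute()
{
    // Customers_IsUnchanged is a hypothetical query filtering on
    // Id == @Id && RowVersion == @RowVersion (FirstOrDefault requires System.Linq).
    var current = this.DataWorkspace.ApplicationData
        .Customers_IsUnchanged(this.Customer.Id, this.Customer.RowVersion)
        .Execute()
        .FirstOrDefault();

    if (current == null)
        this.ShowMessageBox("This record has been modified or deleted by another user.");
    else
        this.ShowMessageBox("The record is still current.");
}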

Attached databases

When you attach to an existing database in LightSwitch 2012, there is no way for LightSwitch to generate a new rowversion column on your table. When attaching to an existing database, LightSwitch will never make modifications to that database’s structure. However, you can take advantage of these concurrency enhancements yourself by adding a rowversion column to your tables using SQL Management Studio or Visual Studio’s SQL Server Object Explorer window.

When LightSwitch attaches to or updates an external database and notices a rowversion column exists on the table, LightSwitch will use just that rowversion column for concurrency conflict detection – just like the ApplicationData source.

However in LightSwitch 2012, if you attach to an external database that doesn’t contain a rowversion column, LightSwitch will fall back to using all available columns for concurrency conflict detection. You will then have the same problems as listed above. In order to work around these issues, it is recommended to add a rowversion column to your database tables.

WCF RIA Service

LightSwitch 2012 now respects the three attributes used in WCF RIA Services to signify that a property should be used in concurrency conflict detection: TimestampAttribute, ConcurrencyCheckAttribute, and RoundTripOriginalAttribute. Any property marked with one of these attributes on your WCF RIA entity will be used for concurrency conflict detection. If your entity doesn’t have any of these attributes on its properties, then all properties will be used for concurrency conflict detection, just like in LightSwitch 2011.
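For reference, here is a minimal sketch of what those markers look like on a hand-written WCF RIA Services entity; the entity and property names are illustrative, and the attributes shown come from System.ComponentModel.DataAnnotations:

using System.ComponentModel.DataAnnotations;

public class Order
{
    [Key]
    public int Id { get; set; }

    // Participates in concurrency conflict detection because of the attribute.
    [ConcurrencyCheck]
    public string Status { get; set; }

    // A SQL Server rowversion column exposed as a timestamp concurrency token.
    [Timestamp]
    public byte[] Version { get; set; }
}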

OData

If you are attaching to an OData service, LightSwitch 2012 will use the ETag value provided by the back-end OData service for concurrency checking. LightSwitch will attach the back-end ETag value to the entity and flow it through to the back-end when making update and delete requests.

Conclusion

Concurrency conflict detection is a low-level technical detail that normally you don’t need to worry about. LightSwitch will do what’s right to make it work as best as possible. However, sometimes you will run into cases where you need to understand how this technical detail works in order to make your application the best application it can be.

Hopefully you now understand the concurrency enhancements that were made in LightSwitch 2012 and why they were made.


Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–June 2012 on 7/9/2012:

Last Fall I started posting a rollup of interesting community happenings, content, samples and extensions popping up around Visual Studio LightSwitch. If you missed those rollups you can check them all out here: LightSwitch Community & Content Rollups. I know I’m a little late with this one, but I took a couple days vacation around Independence Day last week so that I could move into my new home (which looks great by the way)!

LightSwitch HTML Client Preview

The big news in June was the announcement around our roadmap in making it easy to build HTML5-based companion clients for your LightSwitch applications. Having the option to build companion clients for tablet devices without having to know HTML5 or JavaScript is a big deal. If you know data modeling, you can use LightSwitch in Visual Studio 2012 to build out OData services that can be used by a variety of clients. The next obvious step is to provide a development experience for building these clients.

This was a big ask on our UserVoice site for a long time and I can’t tell you how hard it was to keep it a secret ;-). We are all excited that we can now talk with you about our plans and get feedback from you on this early preview. It’s amazing how overwhelmingly positive the feedback has been so far. Get the bits and all the resources you need here:

Microsoft LightSwitch HTML Client Preview for Visual Studio 2012

Download: Microsoft LightSwitch HTML Client Preview for Visual Studio 2012 (COMING SOON)

The preview is a pre-built VHD that comes with a walkthrough tutorial that guides you through the experiences that are working so that you can get an idea of where we’re headed and let us know what you think. There are a lot of great conversations going on in the forums and we’re waiting for your feedback! Just check out some of the great comments on the forums, twitter and our team blog:

  • @ADefWebserver: The #LightSwitch HTML client has enough features now to qualify as a groundbreaking product.
  • @progalex: The #LightSwitch #HTML client is amazing. Hey @VSLightSwitch you did an impressive job!
  • @janvanderhaegen: (Chandler voice) Could the new #LightSwitch #HTML5 BE any more intuitive ?
  • @jpbayley9: Looks like a pretty handy RAD tool #lightswitch #TEE12
  • @mikaelsand: #Lightswitch looks more modern than before. Not quite metro but close #TEE12
  • Great! Download - Install - Play - #Awesome! - Edgar Walther
  • I am impressed by the availability of HTML Client and want to try LightSwitch for enterprise apps development. - alon
  • Never had so much pleasure as I have now develop applications using lightswitch. Perfect !!! :) – Marden LR
  • This has sold me. I'm going to start working with Lightswitch, and see what I can do with it. – Michael
  • Forum thread: Kudos

Thanks to everyone -- early adopters, community rock stars, and lots of new faces – for helping to make LightSwitch even better!

LightSwitch at TechEd

At TechEd North America Jason Zander announced the LightSwitch HTML client and a couple weeks later at TechEd Europe we made the bits available. Jason showed off some quick demos in both of his keynotes that got us a lot of great press. Jay Schmelzer (Director for Visual Studio BizApp tools and also my manager) also did a few sessions and was interviewed on the Visual Studio Toolbox show. Check out some of the TechEd content we have available online:

LightSwitch in the News

As a fallout from all the hype around TechEd, the press picked up our announcement and wrote up some news articles which helped get the word out. Some of them were:

LightSwitch on Channel 9

I sat down with Joe Binder, the Senior PM working on the LightSwitch HTML Client, to shoot a video that we launched onto the home page of Channel 9 that got over 25,000 views the first week! In this interview, Joe walks through the design experience and shows us how easy it is to customize the styling and controls.

Channel 9 Interview: Early Look at the Visual Studio LightSwitch HTML Client

I recommend watching one of the high quality recordings here so you can see the demo better:

MSDN Magazine Column: Leading LightSwitch

Jan van der Haegen continues his journey into the depths of LightSwitch with his regular column in the June issue:

Leading LightSwitch: Tales of Advanced LightSwitch Client Customizations
Enjoy these tales of creating custom applications that show off the versatility and ease of use LightSwitch offers. You will also get a glimpse of how a real pro works with clients.

More Notable Content this Month

Extensions (see all 89 of them here!):

Xpert360 keeps cranking out the LightSwitch extensions! In June they released an extension that lets you clone screens and queries. This has been a popular request from the community and it’s awesome to see Xpert360 fill this gap:

Samples (see all 82 of them here):

Team Articles:

Community Content:

LightSwitch Team Community Sites

Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

David Linthicum (@DavidLinthicum) recommended “Don't skimp on the steps necessary to ready your business for cloud computing” in a deck for his 3 ways to prep for a move to the cloud post of 7/10/2012 to InfoWorld’s Cloud Computing blog:

A move to the cloud requires a certain amount of prep work in the enterprise. If you listen to the spin from those who provide cloud services and technology, it's no big deal. I'm here to tell you it is. To ready your enterprise for public and private cloud adoption, you need to focus on three key areas:

  • Becoming service-aware
  • Dealing with distributed security
  • Upgrading skills

Becoming service-aware is not the same as becoming service-oriented, but if you're service-oriented, you're already service-aware. This is key to understanding how to work with clouds, which typically use APIs -- in other words, services. Thus, you need to understand how to deal with services, as well as how to alter the infrastructure to interact with those APIs/services, such as provisioning, management, and storage. Most enterprises don't have a clue here, so they move to cloud-based systems that are mostly driven through the innovation of APIs/services. This is like purchasing a new TV without a remote control in the box.

Dealing with distributed security means you need to understand that security is no longer just having user IDs and passwords to get to resources. Instead, you must deal with identity-based security for the many moving parts comprising cloud computing, including the use of services/APIs.

Upgrading skills is where enterprises typically drop the ball. Your staff needs to understand what's changing with the enterprise's use of cloud-based resources. Yes, this means hiring new people and firing people who don't step up to make this new technology a success. Make sure you create a human resource plan that's part of your cloud computing strategy.

None of these tasks is easy. They take time and money. However, if you're looking to push cloud computing into an organization that's not ready to deal with it, you won't find any value in cloud computing. Indeed, it will turn into a risk for the business.


Bruno Terkaly (@brunoterkaly) explained Why Platform as a Service will rule the world in a 7/10/2012 post:

Introduction

  1. Automation is taking over the world.
  2. Machine Learning, Smart Robots is what humankind can expect more of.
  3. A radical technology revolution is fast replacing human beings with machines in virtually every sector and industry in the global economy.
  4. As unpleasant as this may sound to some, technology doesn’t wait for anybody.
  5. Our job as technologists is to stay smarter than the robots. “Keep your friends close, keep your robots even closer”
  6. This ever increasing use of smart software to replace humans is occurring throughout the industrialized world.
  7. Even developing nations are working with global companies to build state-of-the-art high-tech production facilities that are supremely efficient.
  8. Cloud computing is one such area that stands to gain tremendously from this phenomenon. Traditional IT shops are redeploying IT workers to other activities that provide better business value.
  9. Microsoft is working hard to improve automation in the cloud.
  10. It is called Platform as a Service and leverages Microsoft's years of experience running large web properties.
    • Some have postulated that Microsoft was too innovative, that PaaS is too radical a departure for what developer and architects are used to
  11. Let's explore some quick differences between Infrastructure as a Service and Platform as a Service.
    1. But what really interests me are some of the deeper details which I will provide about exactly the level of automation you can expect when working with Azure.
  12. I hope to convince you why PaaS is inevitable and destined for greatness.
    • This is not about Microsoft today but all companies that innovate and find ways to let developers focus on their applications and not worry about all the cloud plumbing details.

IaaS and PaaS – Do you know the difference?

  1. We will address both technologies, but there are many more nice things to say about PaaS.
  2. Historically, cloud computing has been about IaaS.
  3. But IaaS lacks the automation of PaaS.
  4. We'll do a brief discussion about IaaS.
  5. PaaS still leverages VMs but PaaS does so much more.

IaaS – Less automation

  1. With Infrastructure as a Service (IaaS), developers must directly interact with a portal or execute scripts for VMs to be created.
  2. A virtual machine (VM) in Windows Azure is a server in the cloud that you can control and manage.
  3. After you create a virtual machine in Windows Azure, you can delete and recreate it whenever you need to, and you can access the virtual machine just as you do with a server in your office.
    1. Virtual hard disk (VHD) files are used to create a virtual machine. Virtual machines are typically abbreviated VMs.
  4. There is little or no automation with respect to leveraging IaaS technologies (relative to PaaS).
    • These low level tasks need to be done at the portal or through scripting.
  5. There are scripts for Windows, Macintosh, and Linux. You can download the scripts here: http://www.windowsazure.com/en-us/manage/downloads/
    • Commands to manage your account information and publish settings
    • Commands to manage your Windows Azure virtual machines
      • Deploy a new Windows Azure virtual machine. By default, each virtual machine is created in its own cloud service; however, you can specify that a virtual machine should be added to an existing cloud service through use of the -c option
    • Commands to manage your Windows Azure virtual machine endpoints
      • Create, delete and list endpoints
    • Commands to manage your Windows Azure virtual machine images
      • Get a list of available images
    • Commands to manage your Windows Azure virtual machine data disks
      • To create images, you can either capture an existing virtual machine or create an image from a custom .vhd uploaded to blob storage
    • Commands to manage your Windows Azure cloud services
      • Create, delete and list available cloud services
    • Commands to manage your Windows Azure certificates
      • Create, delete and list available endpoints
    • Commands to manage your websites
      • List, create, browse, show details for Azure web sites
  6. Generally speaking, developers telnet into the computer hosting the VM and download software and execute installation procedures.
    • The portal offers a limited degree of automation.
  7. Choose from a library of VMs
    • The portal allows developers to select from libraries of pre-initialized VMs containing:
      • Windows Server 2008/2012, CentOS, OpenSUSE, SUSE, Ubuntu, and more.
  8. The process of deploying and scaling with IaaS boils down to cloning pre-configured VM instances.
    • Entire VMs are built-up from scratch and then replicated as needed to reach the needed scale.
  9. The VMs have only the base operating system.
    • It is very common to install many additional packages, such as:
      • Apache for web
      • PHP for the web server environment
      • Drupal for CMS, maybe WordPress.
      • MySQL, MongoDB, etc
  10. The upside of IaaS
    • More control over custom configurations
      • But the price is that more administrative tasks needed.
  11. Many developers feel more comfortable setting up their own VMs and scaling those.
  12. What is missing from an IaaS solution is the beauty of an autonomous robot (the Fabric Controller) doing things for the developer.
  13. With that said, the lines of automation are blurry as IaaS does provide some level of abstraction above a raw VM and is increasing in scope. …

Bruno continues with more off-the-wall topics.


Lori MacVittie (@lmacvittie) asserted “Tools are for automation, devops is for people” in an introduction to her Devops is Not All About Automation post of 7/9/2012 to F5’s DevCentral blog:

It’s easy to get caught up in the view that devops is all about automation. That’s because right now, most of the value of devops and repeatable processes is focused on deployment of applications within virtual or cloud computing environments and dealing with that volatility requires automation and orchestration to combat the growing dearth of human resources available to handle it.

But devops isn’t about the environment, or the automation. Those are just tools, albeit important ones, to achieving devops. Devops is more about the agility and efficiency gained through streamlining processes and being able to react rapidly. You know, agile. It’s about iterating over processes and refining them to make them more efficient. You know, agile.

Devops is about continuity; about ensuring continuous delivery. More often than not this focuses on automated and integrated deployment processes enabling rapid elasticity, but that’s just the most obvious use case. Not every process can be automated, nor should they be. Agility is about being able to react; to have processes in place that can efficiently and effectively address challenges that crop up from time to time.

The programmability of infrastructure, for example, can enable devops to put into place processes that define how IT reacts to emerging threats. This is one of the promises of SDN and OpenFlow – that the network can adapt to external pressures and events through programmatic intervention. With generally new and unknown threats, there’s no immediate remediation available and no button operations can push to deploy a preventive measure against it. Programmatic intervention is necessary. But who is responsible for intervening? That’s part of the question devops should be able to answer.

AN EXAMPLE

If we take as an example the typical response to an emerging threat, say a 0-day threat, we can see how devops applies.

Initially, organizations respond by panicking (more or less; the agitated state of a security professional upon learning about the threat appears similar to panicking in the general population). The response is unpredictable and reactive. If the threat is in the application or application server infrastructure layers, no one’s quite sure who is responsible for handling it. The threat may remain real and active for hours or days before someone figures out what it is they’re going to do.

In a more mature devops stage, experience may have taught operations what to do, but response is still reactive. Operations may not always proactively monitor or scan for potential threats and thus may be caught off-guard when one suddenly appears on the threat radar. The process for handling, however, is repeatable on a per-service basis.

As organizations continue to mature, standards evolve regarding how such threats are handled. Potential threats are monitored and processes are in place to manage eventual emergence. Responsibility is well understood and shared across operations and development. Operations understands at this point what stop-gap measures – such as network-side scripts to prevent penetration of emergent application layer threats – are likely to be needed, and development and administrators understand which of these threats must be addressed by whom, and at what point in the process they must be mitigated.

Quantifying metrics for success follows, with both development and operations using the same metrics. Time to initial redress, time to complete resolution, time at risk, etc… Finally optimization – streamlining – of processes can begin as devops fully matures. Substitution of automated scanning and virtual patching for scanning and manual mitigation occurs, speeding up the process as well as assuring a higher security profile for the entire organization.

Most of this maturation process does not involve nor require automation. Most of it requires people; people who collaborate and define processes and policies that govern the delivery of applications. Devops must first define and refine processes before they can be automated, thus automation is unlikely to occur except in the most mature of devops-enabled organizations.

In many cases, processes will still comprise manual steps. Some of those steps and tasks may be automated at a later date, but until an organization is fully invested in devops and allows the maturation process to occur organically (with guidance of course) automation may result in nothing less than what developers and architects got with SOA – lots of duplication of services that made the entire system more complex, more costly to manage, and more difficult to troubleshoot.

Devops is a verb. It’s not something you build, it’s something you do.


JP Morgenthal (@jpmorgenthal) asserted Cloud Should Be Defined By What It Will Become, Not What It Is Today in a 7/9/2012 post:

There’s been a lot of discussion about what makes cloud computing different than other forms of computing that have come before. Some refer to the set of attributes set forth by NIST, while others rely on less succinct qualifications, satisfied simply to identify network-accessible services as cloud, and others define cloud by applicable business models. In the past, I have written about scale as a common abstraction based upon some of these other definitions. However, more recently, I’ve come to the realization that we need to define cloud by where it’s going and not what it is in its infancy.

Cloud computing is following in the vein of the automobile and fast food industries. These industries introduced their first products with little to no customization and then changed and competed on value based upon significant customization. The automobile industry started out offering only a black Ford Model T and today allows buyers to order a completely custom designed car online delivered to their home. Likewise, cloud computing started out as vanilla infrastructure services and is rapidly moving towards greater levels of customization. Ultimately, cloud computing will not be defined by service model monikers, but will be a complete provision, package and deliver (PPD) capability facilitating control over the type of hardware, operating systems, management systems, application platforms and applications.

When building a new home, buyers go through a process of choosing carpeting, fixtures, countertops, etc., but ultimately, their expectations are that they will be moving into a completed house and not showing up to a pile of items that they then need to further assemble themselves. This is the perspective that we should be applying to delivery of cloud computing services. Consumers should have the opportunity to select their needs from a catalog of items and then expect to receive a packaged environment that meets their business needs.

Much of today’s cloud service provider offerings either approximate raw materials that require additional refinement or are a pre-configured environment that meets a subset of the overall requirements needed. The former approach assumes that the consumer for these services will take responsibility for crafting the completed service inclusive of the supporting environment. The latter approach simplifies management and operations, but places restrictions on the possible uses for the cloud service. Both of these outcomes are simply a result of the level of maturity in delivering cloud services. Eventually, the tools and technologies supporting PPD will improve leading to the agility that epitomizes the goals for cloud computing.

Meeting the goals for PPD entails many prerequisite elements. Chief among these is automation and orchestration. Cloud service providers manage pools of resources that can be ‘carved’ up many different ways. Due to the complexity in pricing and management, most cloud service providers limit the ways this pool is allocated. As the industry matures, service providers will improve at developing pricing algorithms and have greater understanding for what consumers really need. Meanwhile, we will see great improvements in cloud manager software that will facilitate easier management and allocation of resources within the pool allowing for much more dynamic and fluid offerings. Coupled with automation and orchestration, cloud service providers will find it easier to offer consumers greater numbers of options and permutations while still being able to balance costs and performance for consumers.

Defining cloud computing by its nominal foundations is akin to specifying the career choice for a young child. Infrastructure, platform and software services illustrate possibilities and solve some important business problems today. However, most businesses still find cloud environment too limiting for their mission critical applications, such as manufacturing and high-volume transactions. It won’t be long, though, before users can specify the speed of the network, the response requirements for storage, the security profile, number and types of operating system nodes and quality-of-service parameters that the environment must operate under, among many other attributes, and have the service provision, package and deliver to us our requested virtual environment. This is what we should be using as the profile by which we define cloud computing.


David Linthicum (@DavidLinthicum) asserted “As hype dies down, companies are seeing real success in their cloud deployments, and money is pouring in” in a deck for his Cloud computing moves from fad to foundation article of 7/9/2012 for InfoWorld’s Cloud Computing blog:

Despite some setbacks, such as the recent Amazon Web Services outages, cloud computing is beginning to cross from the experimental phase to production systems that businesses can rely on. This has not been an overnight occurrence: Enterprises have been quietly getting smart about cloud computing technology and applying it where appropriate.

Despite years of cloud hype by vendors, you rarely hear about enterprise successes. That's because when enterprises make cloud computing work, they view the application of the technology as a trade secret of sorts, so there are no press releases or white papers. Indeed, if you see one presentation around a successful cloud computing case study, you can bet you're not hearing about 100 more.

The hype is waning, replaced by real deployments. There are two reasons for that success.

  1. Developers are driving much of the success because they embraced the cloud computing model early and transferred their skill sets to the cloud. They do so for selfish reasons: better pay. However, I've found them to be true believers in this technology. They're learning to bring together business applications and cloud computing. Cloud computing providers understand this situation and have spent millions to attract developers to their respective cloud platforms.
  2. Moving to the cloud is now politically correct. Just a few years ago, most people in IT viewed the rise of cloud computing as a threat. Now it's more acceptable to use public or private cloud computing resources. Indeed, it's become riskier not to have some cloud computing projects in your shop.

You see this in the amount of money invested in cloud computing -- it's accelerating, with no sign of tapering off. Companies and investors put their money where the value is.

With the release of Google Compute Engine, as well as recent cloud computing technology announcements from Hewlett-Packard and the telcos, it's clear that companies large and small are making the commitment. There are now careers, share prices, and bonuses tied to the success of cloud computing in a big way. Those vested in it will hustle like hell to make cloud computing work for them. Some will fail, but the sheer amount of money riding on this technology will ensure that it functions well in the longer term.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Microsoft (@WPC) announced New Cloud Opportunities for Partners at the Worldwide Partners Conference (WPC) 2012 according to a 7/10/2012 press release:

During the second day of Microsoft Corp.’s annual Worldwide Partner Conference (WPC), top executives from the company announced new training, tools and other programs that enable partners to deliver compelling new cloud services to their customers. Satya Nadella, president of the Server and Tools Business, announced a community technology preview (CTP) of new technologies that enable hosting service providers to use their Windows Server data centers to deliver capabilities consistent with services running in Windows Azure. In addition, he announced a new program that gives partners guidance, training and software tools to help customers transition from VMware’s virtual infrastructure to Microsoft’s cloud.

“We’ve taken everything that we’ve learned from running data centers and services at a global scale to usher in the new era of the cloud OS,” Nadella said. “Microsoft offers partners modern yet familiar technology to meet customer demand on their path to the cloud.”

With the new CTP, hosting service providers can offer customers turnkey cloud services, including high-scale websites and virtual machine hosting with an extensible self-service portal experience. These capabilities, which run on Windows Server 2012 and Microsoft System Center 2012, will offer hosting providers some of the same experiences and services recently announced by Windows Azure. Go Daddy, the largest global Web hoster, is piloting these new capabilities to deliver new cloud services for customers.

“Customers view Go Daddy as an IT partner with which they can grow,” said Scott Brown, vice president of Product Development – Hosting at Go Daddy. “These new capabilities give customers a seamless path to expanding their online presence. In addition, the improved site performance, scalability and availability all lead to a more enjoyable experience for our customers and their visitors.”

In addition, the new program announced on stage, Switch to Hyper-V, will allow partners to grow their virtualization, private and hybrid cloud computing practices while also helping customers improve IT agility at a lower cost with Microsoft’s cloud infrastructure.

Already, partners are making significant progress in helping their customers with this transition. Microsoft Gold Certified Partner FyrSoft recently helped Iowa-based Pella Corp. migrate nearly 100 percent of its VMware infrastructure — nearly 700 VMware virtual machines — to Hyper-V, moving the company beyond virtualization to a private cloud solution. With the Microsoft private cloud, Pella has evolved its business while reducing IT costs and improving efficiencies. Server and Tools Business Corporate Vice President Takeshi Numoto further details partner opportunities in the era of the cloud OS on a blog published today, and more information can be found here.

In addition, in a keynote that further reinforced how Microsoft is working with its partners to transform businesses throughout the world, Microsoft Business Solutions President Kirill Tatarinov highlighted the incredible opportunity in the year ahead for partners focused on selling business solutions based on Microsoft Dynamics.

“Microsoft brings together technologies in a way that no other company can match,” Tatarinov said. “Microsoft Dynamics takes full advantage of the amazing innovations Microsoft is delivering, and we’re actively supporting our partners in developing and delivering a complete, modern, flexible and cloud-based business solution to grow their businesses. There’s never been a better moment to be a Microsoft Dynamics partner.”

With a renewed focus on building enterprise partnerships, Microsoft announced new global independent software vendors that are choosing or extending their solutions across Microsoft Dynamics. Companies such as Campus Management Corp., Cenium Inc., Cincom Systems Inc., PROS Pricing and Technosoft that are industry leaders in their markets are embracing the Microsoft Dynamics solutions to expand their offerings and in some cases as the core foundation on which to build their unique industry-focused solutions. For instance, global hospitality and hotel solution organization Cenium is extending its Microsoft Dynamics-based business offerings in areas such as property management, procurement, human resources and point of sale; and Campus Management, a leading provider of enterprise software solutions for higher education, is planning to expand global reach by leveraging Microsoft Dynamics AX and providing institutions of any size or complexity more choices when it comes to student information systems and enterprise resource planning solutions.

Other keynotes included the following news and momentum updates from Microsoft senior executives:

• Thom Gruhler, corporate vice president of Windows Phone Marketing, took the stage to demo Windows Phone 8 and highlight that Windows Phone is now a true extension of the Windows that 1 billion users worldwide know and use today.

• Laura Ipsen, corporate vice president of Worldwide Public Sector, provided an overview of Microsoft’s National Plan and citizenship efforts, including empowering youth and driving societal change through the proliferation of Microsoft technology.

Sounds a lot like pieces of the ephemeral Windows Azure Platform Appliance (WAPA) to me. According to the Bringing Windows Azure Services to Windows Server for Hosting Providers page:

Before downloading, plan to have at least four virtual machines running the Windows Server 2012 or Windows Server 2008 R2 operating system in addition to the System Center 2012 SP1 CTP2 VHD. Install .NET Framework 3.5, .NET Framework 4 and all updates on these virtual machines.

Read the quick start guide or detailed step-by-step installation guide to complete the installation.

Free Download

By downloading and using the Web Platform Installer (WebPI), you agree to the license terms and privacy statement for WebPI. This installer will contact Microsoft over the Internet to retrieve product information.

To enable the Virtual Machines IaaS scenarios:

  1. Download the System Center 2012 SP1 CTP2 VHD here.
  2. Download the System Center 2012 SP1 CTP2 - Service Provider Foundation Update for Service Management API (SPF) here.
  3. Install and configure SPF per the deployment guide here.

Resources: Forum

My Sessions at WPC 2012 Containing the Keyword “Azure” post of 7/7/2012 provides a list of all 22 WPC 2012 sessions containing “Azure” in their title or description.



Clint Edmonson (@clinted) posted Virtual Machine Materials from Microsoft Cloud Summit 2012 in Dallas on 7/10/2012:

What an awesome turnout today down in DFW! Great crowd and awesome questions.

Here are the slides from my session: Windows Azure Virtual Machines

You can also download my demo script if you want to walk through the demos on your own.

Clint’s slides provide a great quick tour of WAVMs.


Mary Jo Foley (@maryjofoley) reported Microsoft to bring new Azure cloud services to Windows Server in a 7/10/2012 to ZDNet’s All About Microsoft blog:

imageMicrosoft is bringing some of its newly-announced Windows Azure services -- like virtual machine hosting and Web site hosting -- to Windows Server.

Microsoft announced immediate availability of Community Technology Preview (CTP) test builds of these services during the Day 2 keynote at the Microsoft Worldwide Partner Conference in Toronto.

imageThe new services coming to Windows Server are aimed at service providers and hosting partners. (At some point Microsoft might expand the target audience to other customers, but for now, that's the core audience, officials said.)

The CTPs are versions of four of the same new services that Microsoft announced last month as part of its Azure spring updates.

Microsoft recently announced it was expanding its Windows Azure from a more-or-less pure platform-as-a-service (PaaS) play to a combined PaaS and infrastructure as a service (IaaS) play. The hosted VM capability will allow users to run Windows Server, Linux, SQL Server and SharePoint (and apps built on these platforms) on Windows Azure. Microsoft now is bringing these IaaS scenarios to Windows Server datacenters.

"We're striving to have consistency across three key areas: Customer datacenters, service providers' datacenters and our datacenters," said Ian Carlson, Director of Product Marketing.

The hosted virtual machine CTP is designed to run on Windows Server 2012 and System Center 2012 Service Pack 1. The high-density Web Sites one works on Windows Server 2008 R2 and later. I'm still checking on the service management portal and API CTP requirements.

The service management portal and interface offer hosters a way to differentiate, noted officials with Apprenda, one Microsoft partner that offers "private PaaS" through its recently announced Apprenda Azure product.

"Azure is moving from a product, to an effort," said Rakesh Malhotra, Vice President of Product with Clifton Park, NY-based Apprenda. "You can consume and acquire services through the portal."


Kevin Remde (@KevinRemde) summarized and linked WPC Announcements–Servers and Tools and Azure! (Oh my!) in a 7/10/2012 post:

Some of you may be aware that this week another big Microsoft conference is happening in Toronto. The Worldwide Partner Conference (WPC) for 2012 is in full swing. And like all of these kinds of conferences, there are usually big announcements and reveals and important information shared during the keynotes. This year’s is no exception.

Yesterday (WPC Day 1), for example, Microsoft announced the timing of the release of Windows 8 and Windows Server 2012, and introduced partners to new options for selling Office 365. And today (WPC Day 2) Microsoft announced some additional important resources and opportunities for partners regarding cloud hosting and helping to move your customers from VMware to Hyper-V virtualization.

Here are the links to all of the details.

And in case you’re not watching it already, here is the “Digital WPC” site, which has live streaming and is where you’ll find access to recorded content. It will be a great place to go if you want to watch the keynote recordings when they’re available.


Kristian Nese (@KristianNese) posted Introducing SMB 3.0 with Virtual Machine Manager on 7/10/2012:

imageVirtual Machine Manager 2012 SP1 | Adding a SMB 3.0 File Share to your Hyper-V Cluster

It’s been a long time since I’ve blogged in detail about Virtual Machine Manager.

I am a virtualization dude at heart, and I have spent most of my time lately on Hyper-V in Windows Server 2012 and on the rest of the components in System Center 2012.

But as we’re getting close to the release of System Center 2012 SP1 (my guess, since Windows Server 2012 was announced today at WPC to RTM the first week of August), it’s time to dive into the details once again.

Since there are some major new features and changes in Windows Server 2012, and most of them are bound very tightly to the hypervisor, we will see Virtual Machine Manager adopt them asap.

Today, we’ll take a look at the SMB 3.0 protocol, and how it’s being used by VMM to create flexible Fabric solutions.

SMB3.0

So what exactly is SMB3.0?

The Server Message Block protocol operates as an application-layer network protocol for providing shared access to files, printers and serial ports. You have most likely been using this protocol for decades in your network. One of the good things about SMB 3.0 in Windows Server 2012 is that you can now run virtual machines and SQL Server user databases from an SMB 3.0 file share.

Needless to say, this will create some new options for your private cloud to host VMs.

And when we add Multi-channel and RDMA to the table, this will actually be able to scale out beyond traditional datacenter implementations.

To be able to scale out an SMB 3.0 share, you need a failover cluster with the Scale-Out File Server role. I will blog more about this in the future, but as a quick overview, the Hyper-V hosts will access the SMB share on the cluster using every possible network route. This also introduces something called CA – Continuous Availability – meaning no downtime for your VHDs.

Scale-Out File Server is designed to provide scale-out file shares for server applications.

Benefits of using Scale-Out File Server in Windows Server 2012:

Increased bandwidth by using the total bandwidth of all cluster nodes in the Scale-Out File Server cluster. You’ll notice during cluster creation, when you add the Scale-Out File Server role, that you don’t assign an IP address to the cluster; you only define the subnet. This means that every possible route to the cluster will be used for maximum performance, and it is quite simple, cheap and easy to scale out by adding more routes/servers/NICs.

This leads to the term “Active-Active file shares” since all nodes in the Scale-Out File Server Cluster can accept and serve SMB client requests, also known as Continuous Availability since this provides transparent failover during planned – and unplanned downtime.

The Scale-Out File Server role is built upon Cluster Shared Volumes, meaning that you create your file shares on a CSV. This also gives you some of the other new benefits, like CHKDSK with zero downtime on your CSV (this is independent of the Scale-Out File Server role) without any impact on your applications. Another neat feature is the CSV cache for increased performance in your virtual environment, especially in VDI scenarios.

To summarize before we focus on Virtual Machine Manager: we can recommend using Scale-Out File Server for the ability to scale in an easy, reliable and cost-effective manner. It is not recommended for workloads that generate a lot of metadata operations, such as those of typical information workers. Think of it this way: if you have a large datacenter running a great many virtualization hosts against an FC SAN, and you need to purchase a large number of new hosts to meet business requirements, you would have to buy additional HBA ports for every single new host, increasing the cost further. With an SMB Scale-Out File Server cluster, you would only need the HBA ports on the cluster nodes, and you could connect your virtualization hosts to the cluster using 10 GbE.

You can easily set this up today by using Failover Cluster Manager or Server Manager, and then point the locations for your VHDs to this share in Hyper-V Manager, Failover Cluster Manager or PowerShell.
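As a rough illustration of the PowerShell route (a minimal sketch, not taken from Kristian's post), the following adds the Scale-Out File Server role to an existing failover cluster and publishes a continuously available share on a CSV; the role name, share name, path and security group below are placeholders, and NTFS permissions on the folder still have to be granted separately.

# Run on a node of the file server cluster (Windows Server 2012); names are placeholders.
Import-Module FailoverClusters
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create the share on a Cluster Shared Volume and mark it continuously available.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\Hyper-V-Hosts"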

But we also need a solution on the management side. This is where Virtual Machine Manager comes in handy.

A couple of important things to notice prior to adding your SMB share to your Hyper-V servers/clusters:

  • We recommend that you use a dedicated file server.
  • For SMB 3.0 file shares to work correctly with VMM, the file server must not be a Hyper-V host. This also applies to a highly available file server. Do not add the file server (stand-alone or clustered) as a managed host in VMM.
  • The file share must not be added as a VMM library share.
  • The VMM service account must have local administrative permissions on the file server where the SMB 3.0 share resides. You must assign these permissions outside of VMM.
  • If you used a domain account for the VMM service account, add the domain account to the local Administrators group on the file server.
  • If you used the local system account for the VMM service account, add the computer account for the VMM management server to the local Administrators group on the file server. For example, for a VMM management server that is named VMMServer01, add the computer account VMMServer01$ (see the sketch after this list).
  • Any host or host cluster that will access the SMB 3.0 file share must have been added to VMM by using a Run As account. VMM automatically uses this Run As account to access the SMB 3.0 file share.
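As a minimal sketch of that permission step (my illustration, not from the original post), run something like the following on the file server hosting the SMB 3.0 share; the CONTOSO domain and VMMServer01 names are placeholders carried over from the example above.

# Run in an elevated prompt on the file server that hosts the SMB 3.0 share.
# Grants the VMM management server's computer account local admin rights;
# use the VMM service account instead if VMM runs under a domain account.
net localgroup Administrators "CONTOSO\VMMServer01$" /add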

Adding a SMB3.0 File Share to your Hyper-V Cluster

Navigate to the Fabric workspace in VMM, right-click the Hyper-V cluster on which you’d like to use an SMB 3.0 file share, and click Properties.

Click ‘File Share Storage’, click Add, and type in the UNC path to your SMB share.

Once this is done, you should be able to specify the SMB share as the location for your VMs during creation.

You can also perform this task by using PowerShell:

Example:

# Get the Hyper-V cluster that VMM manages
$hostCluster = Get-SCVMHostCluster -Name "hvcluster.private.cloud"

# Register the SMB 3.0 file share with that cluster
Register-SCStorageFileShare -FileSharePath "\\smboslo\smb" -VMHostCluster $hostCluster


Mike Washam (@MWashamMS) detailed Windows Azure IaaS Overload in a 7/9/2012 post:

imageI took a short breather after TechEd North America and TechEd Europe back-to-back but I did want to put up a post to summarize the sessions around Windows Azure Virtual Machines and Virtual Networks from TechEd 2012. This is a big and extremely important launch for Windows Azure so we have quite a bit of coverage on the subject.

If you were looking for a crash course on Windows Azure IaaS here it is!


Meet the New Windows Azure – Scott Guthrie

Windows Azure Virtual Machines and Virtual Networks – Mark Russinovich

Windows Azure IaaS and How it Works – Corey Sanders

Extending Enterprise Networks to Windows Azure using Windows Azure Virtual Networks – Ganesh Srinivasan

Deep Dive on Windows Azure Virtual Machines – Vijay Rajagopalan

Running Linux on Windows Azure Virtual Machines – Tad Brockway

Migrating Applications to Windows Azure Virtual Machines – Michael Washam

Deploying SharePoint Farms on Windows Azure Virtual Machines – Paul Stubbs

Migrating SQL Server database applications to Windows Azure Virtual Machines – Guy Bowerman, Madhan Arumugam

Running Active Directory on Windows Azure Virtual Machine – Dean Wells

How to Move and Enhance Existing Apps for Windows Azure – Tom Fuller, Greg Varveris, Purush Vankireddy


<Return to section navigation list>

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted “There’s the stuff you develop, and the stuff you don’t; both have to be secured” in an introduction to her Application Security is a Stack post to F5’s DevCentral blog of 7/11/2012:

l7stack

imageOn December 22, 1944 the German General von Lüttwitz sent an ultimatum to Gen. McAuliffe, whose forces (the Screaming Eagles, in case you were curious) were encircled in the city of Bastogne. McAuliffe’s now-famous reply was, “Nuts!” which so confounded the German general that it gave the 101st time to hold off the Germans; reinforcements arrived four days later.

This little historical tidbit illustrates perfectly the issue with language, and how it can confuse two different groups of people who interpret even a simple word like “nuts” in different ways. In the case of information security, such a difference can have as profound an impact as that of McAuliffe’s famous reply.

Application Security

image_thumbIt may have been noted by some that I am somewhat persnickety with respect to terminology. While of late this “word rage” has been focused (necessarily) on cloud and related topics, it is by no means constrained to that technology. In fact, when we look at application security we can see that the way in which groups in IT interpret the term “application” has an impact on how they view application security.

When developers hear the term “application security” they of course focus in on those pieces of an application over which they have control: the logic, the data, the “stuff” they developed. When operations hears the term “application security” they necessarily do (or should) view the term in a much broader sense. To operations the term “application” encompasses not only what developers tossed over the wall, but its environment and the protocols being used.

Operations must view application security in this light, because an increasing number of “application” layer attacks focus not on vulnerabilities that exist in code, but on manipulation of the protocol itself as well as the metadata that is inherently a part of the underlying protocol.

The result is that the “application layer” is really more of a stack than a singular entity, much in the same way the transport layer implies not just TCP but UDP as well, and all that goes along with both. Layer 7 comprises both the code tossed over the wall by developers and the operational components, which makes “application security” a much broader – and more difficult – term to interpret. Differences in interpretation are part of what causes a reluctance for dev and ops to cooperate. When operations starts talking about “application security” issues, developers hear what amounts to an attack on their coding skills and promptly ignore whatever ops has to say.

Acknowledging that application security is a stack and not a single entity enables both dev and ops to cooperate on application-layer security without egregiously (and unintentionally) offending one another.

Cooperation and Collaboration

But more than that, recognizing that “application” is really a stack ensures that a growing vector of attack does not go ignored. Protocol and metadata manipulation attacks are a dangerous source of DDoS and other disruptive attacks that can interrupt business and have a real impact on the bottom line.

Developers do not necessarily have the means to address protocol or behavioral (and in many cases, metadata) based vulnerabilities. This is because application code is generally written within a framework that encapsulates (and abstracts away) the protocol stack and because an individual application instance does not have the visibility into client-side behavior necessary to recognize many protocol-based attacks. Metadata-based attacks (such as those that manipulate HTTP headers) are also difficult if not impossible for developers to recognize, and even if they could it is not efficient from both a cost and time perspective for them to address such attacks.

But some protocol behavior-based attacks may be addressed by either group. Limiting file-upload sizes, for example, can help to mitigate attacks such as slow HTTP POSTs, merely by limiting the amount of data that can be accepted through configuration-based constraints and reducing the overall impact of exploitation. But operations can’t do that unless it understands what the technical limitations – and needs – of the application are, which is something that generally only the developers are going to know. Similarly, if developers are aware of attack and mitigating solution constraints during the design and development phase of the application lifecycle, they may be able to work around them, or innovate other solutions that will ultimately make the application more secure.
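As one concrete, hedged example of such a configuration-based constraint (my sketch, not from Lori's post), IIS request filtering can cap the request body size before it ever reaches the application; the site name below is a placeholder, and the cmdlet comes from the WebAdministration module.

# Requires the IIS WebAdministration module; run in an elevated PowerShell prompt.
Import-Module WebAdministration

# Cap request bodies at 10 MB for the placeholder "Default Web Site" to blunt oversized POSTs.
Set-WebConfigurationProperty -PSPath "IIS:\Sites\Default Web Site" `
    -Filter "system.webServer/security/requestFiltering/requestLimits" `
    -Name "maxAllowedContentLength" -Value 10485760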

The language we use to communicate can help or hinder collaboration – and not just between IT and the business. Terminology differences within IT – such as those that involve development and operations – can have an impact on how successfully security initiatives can be implemented.


<Return to section navigation list>

Cloud Computing Events

Mike Benkovich (@mbenko) described CloudTip #17-Build Connected Windows 8 Apps with Windows Azure in a 7/11/2012 post:

imageYesterday in Dallas we had Scott Guthrie (@ScottGu) and the Azure team put on a great event at the Irving Convention Center to show off what’s new in the Microsoft Cloud story and to dive into getting started with the tools and services that make it work. Chris Koenig did a great job of coordinating the event and Adam Hoffman, Clint Edmonson and Brian Prince all pitched in with sessions about Virtual Machines, Web Sites and how to work with the services.

imageMy talk was on Building Connected Windows 8 Metro applications with Windows Azure, and we showed how to use the Camera UI to upload images to Blob Storage, Geolocation to add a point of interest to a SQL Azure database and then add a pin to a Bing Map, and finally add Notification Services to update the Live Tile. It was a lot of code and I promised to share it here, so if you’re looking for the link to download it is http://aka.ms/dfwWin8Az.

imageHere are some notes to be able to build out & deploy locally and then migrate the services to Azure…

  • imageThis project is designed to run locally against the Azure Storage Emulator and SQL Express. It can easily be modified to run as a cloud service, see steps below.
  • Do a CTRL+SHIFT+F to search for "TODO" to find all the places where you need to personalize settings
  • I've included the script MsdnDB.sql which should be run against a local instance of SQL server, or against a cloud instance.
  • You should download the Bing Map VSIX installer to add functionality for Metro. Download the latest from Visual Studio Gallery here
    http://visualstudiogallery.msdn.microsoft.com/0c341dfb-4584-4738-949c-daf55b82df58
  • I used several packages to enable notifications. These included
    For MyApp --> PM> Install-Package Windows8.Notifications

For MySite --> PM> Install-Package WindowsAzure.Notifications

PM> Install-Package wnsrecipe

To deploy to the Cloud:

  1. Create an Azure Web Site from the Management console, then download the publish settings from the web site dashboard (steps 1-3 can also be scripted; see the sketch after this list)
  2. Create a storage account and update the web.config of MySite with appropriate storage credentials
  3. Create a SQL Azure database
  4. Run the create script MsdnDB.SQL (included) against database
  5. Update credentials in web.config of MySite
  6. Change the MyApp MainPage.xaml.cs URIs to point to your site instead of localhost:19480
  7. Run the NuGet Packages from Package Manager console
  8. Register your app for notifications on https://manage.dev.live.com/Build
  9. Update the Package Name reference in Package.appxmanifest
  10. Add the SID and Client secret to the SendNotification method in LocationController.cs
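For those who prefer to script the Azure-side setup, here is a rough sketch of steps 1-3 using the Windows Azure PowerShell cmdlets (my sketch, not part of Mike's sample); it assumes the cmdlets are installed and a subscription is already imported, and every name, location and credential below is a placeholder.

# Assumes the Windows Azure PowerShell cmdlets and an imported subscription.
# All names, the location and the credentials below are placeholders.
New-AzureWebsite -Name "mysite" -Location "East US"
New-AzureStorageAccount -StorageAccountName "mystorage" -Location "East US"

# Creates a SQL Azure server to host the database; run MsdnDB.sql against it afterwards.
New-AzureSqlDatabaseServer -AdministratorLogin "dbadmin" `
    -AdministratorLoginPassword "ReplaceMe!123" -Location "East US"

# Retrieve the storage key to paste into MySite's web.config storage credentials.
Get-AzureStorageKey -StorageAccountName "mystorage"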

Jim O’Neil (@jimoneil) announced on 7/10/2012 his Big Data in Windows Azure–On-line Presentation to be presented on the Internet on 7/11/2012 at 1:00 PM PDT:

imageCheap storage and ‘unlimited’ scalability in the cloud make Windows Azure a perfect platform for “Big Data” processing. Furthermore, Microsoft’s partnership with Hortonworks to bring the Hadoop map-reduce framework and other parts of that ecosystem to Windows Azure is a key part of its strategy.

imageIn this session we’ll look specifically at the Hadoop on Azure offering, currently in limited preview, including its support for Hive and Pig and its integration with PowerPivot.

Requires Windows LiveMeeting.


Eric D. Boyd (@EricDBoyd) reported Radical Price Cut for CloudDevelop 2012 Conference in a 7/10/2012 post:

imageThanks to the amazing support from our generous sponsors, CloudDevelop 2012 ticket prices have been reduced from $55 to $15. It’s exciting to bring a full-day of education, training and networking focused on architecting, developing and managing applications in the Cloud for only $15. If you are currently building Cloud apps, just getting started learning about Cloud Computing, or you have only heard the buzzwords being thrown around, you should attend CloudDevelop 2012. Registration is open at http://clouddevelop2012attendee.eventbrite.com/ and ticket prices have been reduced to $15.

Sessions at CloudDevelop will include the following topics and technologies:

  • Windows Azure
  • AppHarbor
  • Amazon Web Services (AWS)
  • Cloud Foundry
  • Windows Live Services
  • imageHeroku
  • DevOps
  • Security and Legal Concerns
  • Hybrid Cloud
  • Node.js
  • and more

From speakers including:

And thanks again to the generous sponsors who have made this possible:

Don’t miss this great Midwest cloud computing conference by registering today at http://clouddevelop2012attendee.eventbrite.com/.


Himanshu Singh (@himanshuks) reported Windows Azure Challenge Winners Announced Today at Imagine Cup Finals in Sydney in a 7/10/2012 post:

imageThe world’s premier student technology competition, Microsoft’s Imagine Cup, kicked off its finals on July 6th in Sydney, Australia with more than 350 young technologists from 75 countries and regions. This year marked the event’s 10th year of bringing together students from around the world to step up to the challenge of leading global change and attempt to solve some of the world’s most challenging issues – hunger, the environment, disease, infant mortality, energy, and more!

It was exciting to learn that just under half of this year’s student projects (45%) harnessed the power of the cloud and used Windows Azure in their technology prototypes!

Winners of all competitions were announced this morning via a press release, which can be found on the Imagine Cup virtual press kit, along with images, recent news and a feature story.

As for the Windows Azure Challenge, we are thrilled to announce and congratulate Team Virtual Dreams Azure from Brazil for taking home the win with their project, Eureka, which enables teachers to turn lesson plans into interactive content for students’ phones, PCs and tablets.

Team Virtual Dreams Azure consists of just two brothers, Roberto and Eduardo Sonnino, and this isn’t their first time at the Imagine Cup, or the finals, for that matter. Between them, the two brothers have been to the Imagine Cup finals a collective eight times!

They are the perfect entrepreneurial pair – Roberto loves coding while his brother gravitates more toward design and user interface. The two have dedicated much of their lives to the Imagine Cup, have learned invaluable skills and lessons, and clearly have a bright future ahead. “We started when Roberto was in his last year of high school and I was in my first year. We’ve pretty much grown up with Imagine Cup in our lives,” Eduardo says. “We’ve learned a lot. We’ve learned to manage projects, we’ve learned to develop in short time constraints and deal with really short deadlines, we’ve worked with the newest technology that hasn’t been released yet, and we’ve managed bugs.”

The Sonnino brothers will receive $8,000 USD for their win! The second- and third-place winners also receive prizes: second place is $4,000 USD, and third place is $3,000 USD. Below is background on the Windows Azure Challenge runners-up. Congratulations to all!

Second Place:

  • Team Name: Team Complex
  • Country: Romania
  • School: Universitatea Babes-Bolyai, Technical University of Cluj-Napoca
  • Project: Seedbit is a Web application stored in the cloud that unites individuals, companies and NGOs in a fun, social way to get them involved in social causes they are passionate about.

Third Place:

  • Team Name: Klein Team
  • Country: Algeria
  • School: The Higher School of Computer Science, Chlef University, IGEE Mohammed Bouguerra, Hassiba Ben Bouali - Chlef University
  • Project: DiaLife is a healthcare platform that helps make diabetic patients and their families’ lives easier and simpler by combining the elements of diabetes management into a software solution.

Eric D. Boyd (@EricDBoyd) announced What’s New in Windows Azure Tour Heading to WI in a 7/9/2012 post:

imageLast month, new and improved Windows Azure capabilities were announced and made available in a Windows Azure preview. Following the announcement, many focused and deep technical Windows Azure sessions were presented at TechEd North America and TechEd Europe, and I toured Chicago user groups presenting an overview of the new Windows Azure features. Tomorrow night, the “What’s New in Windows Azure” tour will cross the Illinois border and make its way into Wisconsin at the Wisconsin .NET Users Group.

If you’d like to know more about the new Windows Azure capabilities announced last month, register and come out tomorrow night to the Wisconsin .NET Users Group.

What’s New in Windows Azure

Windows Azure is continually innovating, getting new features, enhanced functionality, reduced pricing and just constantly getting better. Join Windows Azure MVP, Eric D. Boyd for an evening of getting to know what’s new in Windows Azure, walk through and explore the latest Windows Azure features, and get answers to your Cloud Computing and Windows Azure questions.

When
July 10th, 7pm

Where
SafeNet Consulting
10700 Research Drive
Wauwatosa, WI 53226

Register at http://www.wi-ineta.org.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Barb Darrow (@gigabarb) reported Cumulogic launches Java PaaS technology for service providers in a 7/11/2012 post to Giga Om’s blog:

imageCumulogic, a company with strong Sun Microsystems and Java DNA, wants to make its technology the foundation of enterprise-class PaaSes to be offered by telcos, hosting companies and other service providers.

imageThe two-year-old company started out building a managed public PaaS that it would sell to developers, but changed course, said Mike Soby [pictured at right], a CA veteran who joined the company as CEO in February. Now the idea is to be more of an arms dealer to service providers that want to offer an enterprise-friendly Java PaaS to their customers.

“We want to be in the software business, not the service provider business.” On Wednesday, that PaaS infrastructure software, which has been in beta for some time, becomes generally available.

With company co-founders Laura Ventura and Rajesh Ramchandani, both veterans of Sun, and Java super-star developer James Gosling on the board of advisors, Cumulogic can boast strong Java cred. “You can’t have an enterprise PaaS without Java,” Soby told me in an interview. The third co-founder, Sandeep Patni, was the application infrastructure lead for Goldman Sachs’ risk technology group. This is a group that gets Java and gets the enterprise.

In the field, Cumulogic’s foundational software will face off against Red Hat’s OpenShift, a PaaS with Java roots that will also target the service provider market, and VMware’s Cloud Foundry, a multi-language and multi-framework PaaS that VMware is pitching as a PaaS for all clouds, although Cloud Foundry does not, as yet, support J2EE applications. There are also other Java-focused PaaSes out there, including CloudBees.

Cumulogic, based in Cupertino, Calif., has addressed the knotty issue of multi-cloud support, claiming it can manage applications on private and public clouds including CloudStack, OpenStack, Eucalyptus, VMware and Amazon. The company is talking with most of the major cloud and infrastructure-as-a-service (IaaS) providers about the product, and says it has already signed a few, including Contegix, a cloud service provider.


Full disclosure: I’m a registered GigaOm analyst.


Kai Zhao described a New AWS Feature - MFA-Protected API Access in a 7/10/2012 post:

Introduction
imageIn 2009, we introduced AWS Multi-Factor Authentication (MFA), a security feature that requires users to prove physical possession of an MFA device by providing a valid MFA code in addition to their username and password when signing in to AWS websites.

Last November, we introduced AWS virtual MFA, which enabled MFA functionality on your smartphone, tablet, or computer running any application that supports the open OATH TOTP standard.

Today, we're announcing MFA-protected API access, which extends AWS MFA protection to AWS service APIs. You can now enforce MFA authentication for AWS service APIs via AWS Identity and Access Management (IAM) policies. This provides an extra layer of security over powerful operations that you designate, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3.

How do I Get Started with MFA-Protected API Access?
You can get started in two simple steps:

  1. Assign an MFA device to your IAM users. You can get AWS Virtual MFA for no additional cost, which enables you to use any OATH TOTP-compatible application on your smartphone, tablet, or computer. Alternatively, you can purchase a hardware MFA key fob from Gemalto, a third-party provider.
  2. Add an MFA-authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs): Amazon S3 buckets, SQS queues, and SNS topics.

How do I write an IAM policy to require MFA?
The policy below is a basic example of how to enforce MFA authentication on users attempting to call the Amazon EC2 API. Specifically, it grants access only if the user authenticated with MFA within the last 300 seconds.

{
  "Statement":[{
    "Action":["ec2:*"],
    "Effect":"Allow",
    "Resource":["*"],
    "Condition":{
      "NumericLessThan":{"aws:MultiFactorAuthAge":"300"}
    }
  }]
}

This policy utilizes a new condition key, aws:MultiFactorAuthAge, whose value indicates the number of seconds since MFA authentication. If the condition “matches”, i.e. the value of aws:MultiFactorAuthAge is less than 300 seconds at the time of the API call, then access is granted.

For more information on writing such policies (including Deny examples, which can be slightly more tricky, or how to set policies that check for MFA authentication irrespective of freshness), see the IAM documentation.

How MFA-Protected API Access Works
MFA-protected API access simply requires users to enter a valid MFA code before using certain functions designated by account administrators. The diagram below details how the process works in the programmatic use case. Because the AWS Management Console calls AWS service APIs, you can enforce MFA on APIs regardless of access path, either from programmatic API calls or via the console user interface.

Step 1: MFA-protected API access utilizes temporary security credentials, which can be used just like long-term access keys to sign requests to AWS APIs. The process to request temporary security credentials is largely unchanged, except the user enters an MFA code into your application that requests temporary security credentials on behalf of the user.

Step 2: If the MFA authentication succeeds, the application receives temporary security credentials that include MFA-authenticated status.

Step 3: The application calls APIs on behalf of the user using the temporary security credentials acquired in Step 2. As part of the authorization process, AWS will validate the credentials and the MFA authentication.
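To make Step 1 concrete, here is a minimal, hypothetical sketch using the AWS Tools for Windows PowerShell (not part of the original post); the MFA serial number and token code are placeholders, and long-term credentials are assumed to be configured already.

# Assumes the AWS Tools for Windows PowerShell are installed and configured
# with long-term credentials. The serial number and token code are placeholders.
$tempCreds = Get-STSSessionToken -SerialNumber "arn:aws:iam::123456789012:mfa/alice" `
    -TokenCode "123456"

# $tempCreds now holds an AccessKeyId, SecretAccessKey and SessionToken that carry
# MFA-authenticated status and can be used to sign subsequent API calls (Steps 2-3).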

Price = No Additional Cost
MFA-protected API access is a feature of IAM available at no extra cost. You pay only for the other AWS services that you or your IAM users use.

Once again, you can visit the IAM documentation to learn more. We’re always interested in hearing about your use case, so please let us know what you think!

Kai Zhao
Product Manager – AWS Identity and Access Management


Jeff Barr (@jeffbarr) reported AWS Elastic Beanstalk - Two Additional Regions Supported in a 7/10/2012 post:

imageWe've brought AWS Elastic Beanstalk to both of the US West regions, bringing the total to five:

  • US East (Northern Virginia)
  • Asia Pacific (Tokyo)
  • EU (Ireland)
  • US West (Oregon)
  • US West (Northern California)

I have recently spent some time creating and uploading some PHP applications to Elastic Beanstalk using Git and the new 'eb' command. The process is very efficient and straightforward. I edit and test my code locally (which, for me, means an EC2 instance), commit it to my Git repository, and then push it (using the command git aws.push) to my Elastic Beanstalk environment. I can focus on my code while Elastic Beanstalk handles all of the deployment and management tasks including capacity provisioning, load balancing, auto-scaling, and health monitoring. I wrote an entire blog post on Git-based deployment to Elastic Beanstalk.
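For reference, a rough sketch of that loop from a shell prompt might look like the following; it assumes the Elastic Beanstalk command line tools (the 'eb' command mentioned above) and the Git deployment extensions are installed and configured, and the commit message is only an example.

# One-time setup inside the Git repository (eb prompts for credentials, region, etc.).
eb init
eb start

# Day-to-day loop: commit locally, then push the commit to Elastic Beanstalk.
git add -A
git commit -m "Describe the change"
git aws.push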

imageIn addition to running PHP applications on Linux using the Apache HTTP server, Elastic Beanstalk also supports Java applications running on the Apache Tomcat stack on Linux and .NET applications running on IIS 7.5. Each environment is supported by the appropriate AWS SDK (PHP, Java, or .NET).

You can get started with Elastic Beanstalk at no charge by taking advantage of the AWS Free Usage Tier.


<Return to section navigation list>
