Wednesday, October 12, 2011

Windows Azure and Cloud Computing Posts for 10/12/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

My (@rogerjenn) Ted Kummert at PASS Summit: Hadoop-based Services for Windows Azure CTP to Release by End of 2011 post started as follows:

Ted Kummert announced on 10/12/2011 in his PASS Summit 2011 keynote a partnership with Hortonworks to port Apache Hadoop to Windows Azure by the end of 2011. From the Microsoft Expands Data Platform With SQL Server 2012, New Investments for Managing Any Data, Any Size, Anywhere press release of the same date:

Microsoft is committed to helping customers manage any data, any size, anywhere with the SQL Server data platform, Windows Server and Windows Azure. Hortonworks has a rich history in leading the design and development of Apache Hadoop. Their experience and expertise in this space helps us accelerate our delivery of our Hadoop based distribution on Windows Server and Windows Azure while maintaining compatibility and interoperability with the broader ecosystem.

• Updated 10/12/2011 1:10 PM PDT with a post by Gianugo Rabellino (@gianugo) to the Port 25 blog. See end of post.

Ted posted Microsoft Expands Data Platform to Help Customers Manage the ‘New Currency of the Cloud’ at 9:00 AM:

This morning, I gave a keynote at the PASS Summit 2011 here in Seattle, a gathering of about 4,000 IT professionals and developers worldwide. I talked about Microsoft’s roadmap for helping customers manage and analyze any data, of any size, anywhere -- on premises, and in the private or public cloud.

Microsoft makes this possible through SQL Server 2012 and through new investments to help customers manage ‘big data’, including an Apache Hadoop-based distribution for Windows Server and Windows Azure and a strategic partnership with Hortonworks. Our announcements today highlight how we enable our customers to take advantage of the cloud to better manage the ‘currency’ of their data.

We often talk about the economics of the cloud, detailing how customers can achieve unmatched economies of scale by taking advantage of public or private cloud architectures. As an example, an enterprise with a small incubation project could theoretically take it to production overnight, thanks to the elasticity and scalability benefits of the cloud.

As we turn more and more to the cloud, data becomes its currency. The exchange of data is the heart of all cloud transactions, and, as in a real-world economy, more value is created whenever data is generated or consumed. But there are new business challenges that this currency creates: How do we deal with the scope and scale of the data we manage? How do we deal with the diversity of types and sources of data? How do we most efficiently process and gain insight from datasets ranging from megabytes to petabytes?

How do we bring the world’s data to bear on the tasks of the enterprise, as businesses ask themselves questions like: “What can data from social media sites tell me about the sentiment of my brands and products?” And, how do we enable all end-users to gain the critical business insights they need – no matter where they are and what device they are using? Customers need a data platform that fully embraces the cloud, the diversity and scale of data both inside and outside of their ‘firewall’ and gives all end-users a way to translate data into insights – wherever they are.

Microsoft has a rich, decades-long legacy in helping customers get more value from their data. Beginning with OLAP Services in SQL Server 7, and extending to SQL Server 2012 features that span beyond relational data, we have a solid foundation for customers to take advantage of today. The new addition of an Apache Hadoop-based distribution for Windows Azure and Windows Server is the next building block, seamlessly connecting all data sizes and types. Coupled with our new investments in mobile business intelligence, and the expansion of our data ecosystem, we are advancing data management in a whole new way. …

Read more.


Avkash Chauhan (@avkashchauhan) summarized Microsoft and Hadoop Adoption: The Big announcement about Big Data in a 10/12/2011 post:

Today Microsoft announced big plans for Hadoop adoption: delivering enterprise-class Apache Hadoop-based distributions on both Windows Server and Windows Azure. See Announcement.

Microsoft will provide a simplified download, installation and configuration experience for several Hadoop-related technologies.

The Hadoop-based service for Windows Azure will allow any developer or user to submit and run standard Hadoop jobs directly on the Azure cloud with a simple user experience. The central idea is that you should be able to take any standard Hadoop job and deploy it on our platform, regardless of which platform you used to develop your Hadoop jobs.

What’s in it for developers:

  • Enable integration with Microsoft developer tools
  • Investment in making JavaScript a first-class language for Big Data.
  • Making it possible to write high-performance Map/Reduce jobs using JavaScript.

What’s in it for end users:

  • Hadoop-based applications targeting the Windows Server and Windows Azure platforms
  • The applications will easily work with Microsoft’s existing BI tools like PowerPivot and Power View
  • Providing an ODBC Driver and an Add-in for Excel, each of which will interoperate with Apache Hive, to enable self-service analysis on business information

Released today: two new connectors that enable customers to work effectively with both structured and unstructured data.

  • SQL Server connector for Apache Hadoop lets customers move large volumes of data between Hadoop and SQL Server 2008 R2
  • SQL Server PDW connector for Apache Hadoop moves data between Hadoop and SQL Server Parallel Data Warehouse (PDW).

Visit:


Dhananjay Kumar (@debug_mode) explained Inserting Null Value for Integer columns in Windows Azure Table in a 10/12/2011 post:

I recommend reading the post below before you start reading this one: Creating Azure Table using CloudTableClient.CreateTableIfNotExist.

There may be a requirement to insert null values for value-type columns in an Azure table. In our recent project, we had a requirement to insert null values for integer columns. Usually, if you don’t provide any value for an integer variable, it gets initialized to 0, since integers are value types. This post will focus on providing null values for integer columns.
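The difference is easy to see in a small console snippet (this example is mine, not from the original post):

using System;

class NullableDemo
{
    static void Main()
    {
        int rollNumber = default(int);   // value type: silently becomes 0
        int? nullableRollNumber = null;  // Nullable<int>: can represent "no value"

        Console.WriteLine(rollNumber);                   // prints 0
        Console.WriteLine(nullableRollNumber.HasValue);  // prints False
    }
}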

First you need to define an entity as below:

[code screenshot]

You may have noticed that I am declaring the integer properties as nullable types:

[code screenshot]

Now create Azure table as below:

[code screenshot]

Then you can insert null value for integer columns as below:

[code screenshot]

For your reference full source code to insert null values is as below:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace ConsoleApplication7
{
    class Program
    {
        static void Main(string[] args)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse("connectionString");
            CloudTableClient tableClient = new CloudTableClient(
                account.TableEndpoint.ToString(),
                account.Credentials);
            string tableName = "School";
            tableClient.CreateTableIfNotExist(tableName);
            TableServiceContext context = tableClient.GetDataServiceContext();
            SchoolEntity entity = new SchoolEntity
            {
                Age = null,
                Name = "DJ",
                PartitionKey = "S",
                RollNumber = null,
                RowKey = Guid.NewGuid().ToString()
            };

            context.AddObject(tableName, entity);
            context.SaveChanges();

            Console.WriteLine("Saved");
            Console.ReadKey(true);
        }
    }

    public class SchoolEntity : TableServiceEntity
    {
        public string Name { get; set; }
        public int? RollNumber { get; set; }
        public int? Age { get; set; }
    }
}

In this way you can insert null values.
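If you later want to verify that the null values round-trip correctly, a query along the following lines should work. This is a minimal sketch of my own (not from the original post), assuming the same SchoolEntity class, table name and connection string used above:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class ReadBack
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("connectionString");
        CloudTableClient tableClient = account.CreateCloudTableClient();
        TableServiceContext context = tableClient.GetDataServiceContext();

        // Query the "School" table for the partition used above.
        var entities = context.CreateQuery<SchoolEntity>("School")
                              .Where(e => e.PartitionKey == "S")
                              .ToList();

        foreach (SchoolEntity e in entities)
        {
            // Age is int?; HasValue is false when the column was stored as null
            // (table storage simply omits properties that were saved as null).
            Console.WriteLine("{0}: Age = {1}", e.Name,
                e.Age.HasValue ? e.Age.Value.ToString() : "null");
        }
    }
}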


Dhananjay Kumar (@debug_mode) described Creating Azure Table using CloudTableClient.CreateTableIfNotExist in a 10/12/2011 post:

I discussed Three simple steps to create Azure table in my last post. Just after submitting the post, I got a comment from Neil Mackenzie pointing out that I could avoid creating the context class [Step 2] in that post. He helped me with a link to the best practice on MSDN.

This post accommodates his suggestion and the recommendation given on MSDN.

In this post we will see how to create an Azure table using the CloudTableClient.CreateTableIfNotExist method.

Add References

If you are creating the table from a console application, you need to add the references below to your project:

System.Data.Services.Client

[screenshot]

And Microsoft.WindowsAzure.ServiceRuntime.dll (the code below also uses types from Microsoft.WindowsAzure.StorageClient.dll).

[screenshot]

Create Entity class

To represent the entity, we need to create a class inheriting from TableServiceEntity. The SchoolEntity class will be as below:

[code screenshot]

If you notice, the above code snippet has no:

  1. Row Key
  2. Partition Key
  3. Call to the base class constructor [as I discussed here]

Now when you create an object of the SchoolEntity class, IntelliSense will offer RowKey, PartitionKey and Timestamp to set, as below:

[screenshot]

Creating Account object

[code screenshot]

Pass the connection string for your storage account.
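For reference, a storage connection string generally follows the pattern below; the account name and key are placeholders, and for local development you can use the built-in development storage account instead (these lines would take the place of the Parse call shown in the full listing at the end of this post):

// Placeholder values; substitute your own storage account name and key.
string connectionString =
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<your-key>";
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);

// Or, when running against the local storage emulator:
CloudStorageAccount devAccount = CloudStorageAccount.DevelopmentStorageAccount;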

Creating Table Client

[code screenshot]

Creating Table

[code screenshot]

Creating Table Context

[code screenshot]

Creating Object to Insert

[code screenshot]

Adding Object to table

[code screenshot]

Putting all the code together,

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace ConsoleApplication7
{
    class Program
    {
        static void Main(string[] args)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse("connectionString");
            CloudTableClient tableClient = new CloudTableClient(
                account.TableEndpoint.ToString(),
                account.Credentials);
            string tableName = "School";
            tableClient.CreateTableIfNotExist(tableName);

            TableServiceContext context = tableClient.GetDataServiceContext();

            SchoolEntity entity = new SchoolEntity
            {
                Age = 9,
                Name = "DJ",
                PartitionKey = "S",
                RollNumber = null,
                RowKey = Guid.NewGuid().ToString()
            };
            context.AddObject(tableName, entity);
            context.SaveChanges();

            Console.WriteLine("Saved");
            Console.ReadKey(true);
        }
    }

    public class SchoolEntity : TableServiceEntity
    {
        public string Name { get; set; }
        public int? RollNumber { get; set; }
        public int Age { get; set; }
    }
}

In this way you can create a table.


<Return to section navigation list>

SQL Azure Database and Reporting

David Linthicum (@DavidLinthicum) asserted “Data complexity and performance needs are the two reasons enterprises will find solace in cloud-borne database technology” in a deck for his Why 2013 will be the year of the cloud database article of 10/12/2011 for InfoWorld’s Cloud Computing blog:

Those businesses that store data in the cloud typically use primitive file storage systems rather than databases. However, these days many cloud computing platforms are adding or enhancing their database offerings, thus becoming more compelling to enterprises.

For example, Google now offers a relational database for its cloud-hosted App Engine application development and a hosting platform called Google Cloud SQL -- in short, MySQL in the cloud. Both Oracle and Microsoft have their own cloud-based database offerings. Amazon Web Services offers Relational Database Service (RDS), as well as other popular databases in its IaaS product.

Of course, a fraction of enterprise data already exists in public clouds, so why will 2013 be the year of migration to cloud-hosted database systems? There are two core reasons for the migration and a simpler reason for the 2013 timeframe.

The first and most critical reason for the migration is that data in most enterprises is a huge mess. For years, databases have been built around applications or some tactical need, creating hundreds or even thousands of data silos that are difficult to integrate or to even provide a common view of business information. The advantage of migrating some data to cloud-based databases is around the cost and ease of doing so. Thus, you can spin up terabytes of operational data stores without having huge software and hardware costs appearing on budgets and causing concern in the CFO's office or, worse, the boardroom.

The second reason is performance. Databases in enterprises often don't provide data in a timely manner to support those running the business. Queries that should take 10 or 15 minutes instead take hours. Moreover, in many instances, the data is erroneous. Cloud-based databases, if engineered correctly, typically provide much better performance than traditional on-premise systems. This is due to the fact that they can gather up as many processor instances as required to complete the database processing quickly.

No, cloud-delivered databases won't save you. Indeed, they could complicate matters if you're not careful. However, the ease of provisioning, cost advantage, and better performance mean that they are the best value when it comes to database processing.

Given all the positives, do I believe the migration timeframe is 2013? As in all things, there is a ramp-up period. Now that commercial options are available from several vendors, we'll start getting toes-in-the-water efforts in 2012, then see the big wave of implementations in 2013.

Based on the new and upgraded SQL Azure features I expect to arrive later in 2011, I’d say 2012 is likely to be the “year of the cloud database.”


Lynn Langit (@llangit) announced on 10/12/2011 that she’s Leaving Microsoft:

This will be the last post to this blog, as I am leaving Microsoft effective Monday, October 17.

Post-Microsoft I’ll be contracting (data projects, both production and education) and spending more time growing ‘Teaching Kids Programming’ around the world. Two years ago someone dear to me gave me the book below, little did I know then how true it would prove…

Tomorrow I go to Africa, to speak at TechEd Africa on my last day of work for Microsoft. Wish me luck. Follow my new adventures at http://www.lynnlangit.com.

Here’s wishing you good luck and a safe journey to Africa and beyond.


Paras Doshi continued his series with Part 4 of “Getting started with SQL Azure” is live! on 10/11/2011:

The aim of the “Getting started with SQL Azure” series is to offer you a set of brief articles that could act as a launchpad for your journey of exploring Microsoft’s cloud-based database solution, SQL Azure.

In this blog post, I have discussed the SQL Azure architecture. Link:

http://beyondrelational.com/blogs/parasdoshi/archive/2011/10/10/getting-started-with-sql-azure-part-4-lt-lt-paras-doshi.aspx

Just to recap:

In part 3, I have discussed:

  1. Provisioning model of SQL Azure
  2. Billing Model of SQL Azure

Read more: Getting started with SQL Azure – Part 3 << Paras Doshi

In part 2, I have discussed:

  1. How to sign up for a free trial of Windows Azure (to play with SQL Azure)!
  2. How to create your very first SQL Azure database (and a table too!)
  3. How to connect to a SQL Azure server via SQL Server Management Studio.

Read more: Getting started with SQL Azure – Part 2 << Paras Doshi

And in part 1, I have discussed:

  1. Where SQL Azure fits in the Windows Azure platform
  2. Defined SQL Azure
  3. Advantages of SQL Azure

Read More: Getting started with SQL Azure – Part 1 << Paras Doshi

Related articles


<Return to section navigation list>

MarketPlace DataMarket and OData

My (@rogerjenn) Ted Kummert at PASS Summit: “Data Explorer” Creates Mashups from Big Data, DataMarket and Excel Sources post of 10/12/2011 begins:

Microsoft Corporate Vice President Ted Kummert described a new business intelligence (BI) tool codenamed “Data Explorer” for integrating data in on-premises SQL Server, cloud-based SQL Azure, Marketplace DataMarket and overlaying it on Excel worksheets. From Microsoft’s press release of 10/12/2011:

Today Kummert also demonstrated Microsoft code-name “Data Explorer,” a prototype that provides a way for customers to easily discover, enrich and share data to gain competitive advantage in today’s business climate. When combined with Windows Azure MarketPlace, now available in 26 worldwide markets, Data Explorer will help customers realize their data’s full potential. Customers are encouraged to begin testing and to provide feedback when CTPs are made available in the SQL Azure Labs later this year at http://www.SQLAzureLabs.com. [emphasis added] …

The video is a bit more explicit on the CTP’s release date:

[screenshot]

This new experience provides a new way to organize, manage, mashup and gain new insights from the data that you care about. Microsoft Codename "Data Explorer" provides capabilities for data curation, collaboration, classification and mashup, opening new capabilities and opportunities around the data that you own or want to work with.

Merge mashup data or publish to an Excel worksheet, PowerPoint slide, Mashup, or connect to an OData feed:

[screenshot]

Read the entire post.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Alan Smith (@alansmith) announced the Official release of “The Developer’s Guide to AppFabric" in a 10/12/2011 post:

I’ve just published the official release version of “The Developer’s Guide to AppFabric”. After receiving some great feedback from people I’ve corrected some errors and typos. There’s no new content since the CTP, but I’m busy working on that and there will be a lot more coverage of brokered messaging in the November release. If you want notification of new releases, follow me on Twitter @alansmith.

You can download the October 2011 release here.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Lori MacVittie (@lmacvittie) asserted “Application delivery infrastructure can be a valuable partner in architecting solutions …” in an introduction to her Infrastructure Architecture: Whitelisting with JSON and API Keys post of 10/12/2011 to F5’s DevCentral blog:

AJAX and JSON have changed the way in which we architect applications, especially with respect to their ascendancy to rule the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It’s REST-ful, it’s service-oriented, and it works well.

Because we’ve taken to leveraging the URI as a basic building block, as the entry-point into an application, it affords the opportunity to optimize architectures and make more efficient the use of compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency are measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of a given application instance and ensure a higher ratio of valuable processing to resources is achieved.

The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy based infrastructure component can provide valuable services to the application architect that can enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives.

AN EXAMPLE

Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API. Most of these options require several steps, however, and interaction directly with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests.

An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (and thus requires a lookup) but rather is algorithmic and therefore able to be verified as valid without needing a specific lookup at run-time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result.

This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.

1. A request is received by the application delivery service.

2. The application delivery service extracts the API key from the URI and determines validity.

3. If the API key is not legitimate, a JSON-encoded response is returned.

4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.
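On an F5 device this logic would normally be implemented as a network-side script (an iRule); the fragment below is simply my own language-neutral sketch in C# of the validation idea: an HMAC-signed key is extracted from the URI and checked algorithmically, with no database lookup, and a JSON error body is returned when the check fails. The key format, shared secret and JSON shape are all illustrative assumptions, not part of any F5 API.

using System;
using System.Security.Cryptography;
using System.Text;

static class ApiKeyCheck
{
    // Illustrative shared secret; in practice this would be provisioned securely.
    static readonly byte[] Secret = Encoding.UTF8.GetBytes("example-shared-secret");

    // Assumed key format: "<customerId>.<hex HMAC-SHA256 of customerId>"
    public static bool IsValid(string apiKey)
    {
        string[] parts = apiKey.Split('.');
        if (parts.Length != 2) return false;

        using (var hmac = new HMACSHA256(Secret))
        {
            byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[0]));
            string expectedHex = BitConverter.ToString(expected).Replace("-", "").ToLowerInvariant();
            return expectedHex == parts[1].ToLowerInvariant();
        }
    }

    // JSON body returned when the key fails validation (step 3 above).
    public const string InvalidKeyJson =
        "{\"error\":{\"code\":403,\"message\":\"invalid API key\"}}";
}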

Such an approach can also be used to enable or disable functionality within an application, including live-streams. Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively “turns off” the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content.

Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply “stop working” without any kind of feedback or message to the end user.

The application delivery service could, of course, perform other actions based on the in/validity of the request, such as directing the request be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application level routing.

The possibilities are quite extensive and implementation depends entirely on goals and requirements to be met.

Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial.

ARCHITECTURE is KEY

The core principle to remember here is that ultimately infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today. With the increasing interest and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling what were once application-only services within the infrastructure. This abstraction and service-oriented approach reduces the burden on the application as well as its developers.

The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client’s request. Leveraging its capabilities to inspect and manipulate as well as route and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as leveraging existing and integrated resources for maximum efficiency, with minimal effort.


Jerry Huang posted Introducing Gladinet Cloud Server in a 10/11/2011 post to his Gladinet blog:

Gladinet Cloud Server is Gladinet CloudAFS combined with Gladinet Cloud for Teams. On one hand, it is a file server that allows you to mount different cloud storage services such as Amazon S3, Windows Azure, Google Storage and EMC Atmos into a Windows file server. On the other hand, it comes with Gladinet Cloud by default, so it works out of the box (instead of waiting for you to mount a cloud storage service before it is fully functional). This is a step toward a complete cloud access solution for customers. Gladinet Cloud is based on Amazon S3. By delivering it as default storage to customers, Gladinet makes it simpler for users to use the cloud.

Installation

Gladinet Cloud Server installs through a standard Windows MSI package.

[screenshot]

Next, you will connect to Gladinet Cloud.

[screenshot]

Management Console

After you connect to your Gladinet Cloud account, you will see the contents from your Gladinet Cloud in the Storage Access Manager.

[screenshot]

Attach Cloud Storage to File Server

Like before, you can attach a cloud storage service, such as Amazon S3, to the Cloud Server.

[screenshot]

Publish Shares

After the cloud storage service is mounted, you can publish network shares out of it.

[screenshot]

Map Drive to the Attached Cloud Storage Folder

From a client machine, you can map a network drive to the Cloud Server.

[screenshot]

Video Demo

Here is a 9-minute YouTube video for the new Gladinet Cloud Server.

Related Posts

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) reported availability of a new How Do I: Create Custom Search Screens in LightSwitch? (Beth Massi) video in a 10/12/2011 post to the Visual Studio LightSwitch blog:

imageCheck it out! We just released a new How Do I video on the LightSwitch Developer Center. I created this one based on suggestions from you so I hope you like it!

How Do I: Create Custom Search Screens in LightSwitch?

In this video you will learn a couple of techniques you can use when creating custom search screens. You will see how to create custom queries to provide specific search criteria with limited choices, search within a range of dates, as well as provide a filter by related parent records.


Kalyan Bandarupalli posted OData and Windows Azure to the Tech Bubbles blog on 10/11/2011:

This post discusses building a service on the cloud platform that can reach various devices. What is OData and where does it fit in? OData is a specification that makes it very easy to exchange and interact with data on the web, so OData is all about connecting devices to the cloud. This post also discusses how to create an OData service in Visual Studio 2010, host it on Windows Azure, and then consume the service on Windows Phone Mango.

What Is OData?

A REST-based set of patterns for accessing information via services.

It is a great protocol for connecting devices to the cloud. The REST APIs you might have developed typically have the following common requirements:

  • Querying the data
  • Ordering the data
  • Paging the data
  • Filtering the data
  • Even CRUD operations on data

OData provides a common way to perform the above operations. If you have a web API and you use OData, you get a wide range of options for exposing your data to various client libraries and platforms.
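For example, the standard OData system query options map directly onto those requirements (the service and entity names in these URIs are hypothetical):

Filtering: http://example.com/odata.svc/Workouts?$filter=Duration gt 30
Ordering:  http://example.com/odata.svc/Workouts?$orderby=Date desc
Paging:    http://example.com/odata.svc/Workouts?$top=10&$skip=20
CRUD:      GET/POST/PUT/DELETE against http://example.com/odata.svc/Workouts(1)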

[screenshot]

1. Create a new Windows Azure Project in Visual Studio 2010 as shown below

[screenshot]

2. Create a Data Access layer to access the data from SQL Azure as shown below

[screenshot]

Your Workout class looks as below:

[code screenshot]

Your data access layer is ready to interact with the database.

The connection string in web.config looks as below. You can use a local SQL Server database while you’re developing the application, and then change the connection string to point to SQL Azure.

[screenshot]

3. Now create the OData service. To do this we can use WCF Data Services, which is Microsoft’s implementation of the OData protocol in .NET. Add a new item to the project and select WCF Data Service from the dialog box.

[screenshot]

Your OData Service Class code looks as below

[code screenshot]

Now you have an OData service that is hooked up to your data model. You can specify which parts of your data you want to expose in the InitializeService method, as below:

[code screenshot]

Here you are granting read permission on your data and also setting the page size for returned data. Now we have a working OData service.
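The code behind that screenshot typically looks something like the sketch below; the entity-set name ("Workouts"), the WorkoutEntities context type and the page size of 20 are assumptions based on the description above, not the author’s exact code:

using System.Data.Services;
using System.Data.Services.Common;

public class WorkoutDataService : DataService<WorkoutEntities>
{
    // Called once to configure the service.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to the Workouts entity set, as described above.
        config.SetEntitySetAccessRule("Workouts", EntitySetRights.AllRead);

        // Server-driven paging: return at most 20 entities per response.
        config.SetEntitySetPageSize("Workouts", 20);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}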

Run your service and you can see the below output

[screenshot]

As the service supports multiple formats, if you want to view the JSON output you can install a JSON viewer from here.

[screenshot]

Supporting Authentication for the OData Service

We allow only authenticated users to insert new workouts in the above service. You need to write an interceptor to achieve this:

[code screenshot]

The ChangeInterceptor is invoked every time you insert or update data through the service. If the user is not authenticated, an error message is returned; otherwise we process the claims in the credentials. Here it is using ACS with OAuth.
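A change interceptor of that kind usually looks something like the following; this method sits inside the data service class, and the entity-set name ("Workouts"), entity type (Workout) and error message are assumptions rather than the author’s exact code (it also requires references to System.Data.Services and System.Web):

// Runs on every insert, update or delete against the Workouts entity set.
[ChangeInterceptor("Workouts")]
public void OnChangeWorkout(Workout workout, UpdateOperations operations)
{
    if (HttpContext.Current == null || !HttpContext.Current.Request.IsAuthenticated)
    {
        // Unauthenticated callers get an error back; requests carrying a valid
        // ACS-issued OAuth token are authenticated by the protection module and pass through.
        throw new DataServiceException(401, "Authentication required to modify workouts.");
    }
}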

The web.config entry for the OAuth protection module looks as below. You can download the OAuth protection module from here.

[screenshot]

If a request comes with an OAuth token, this module communicates with ACS and validates the user.

The following web.config entries tell the module where to find the ACS host, the service URL and other settings:

[screenshot]

You get the above config values from the Windows Azure Management Portal.

[screenshot]

Build a client to access the OData service created above; here we are consuming it from a Windows Phone application. Add a service reference to the project:

[screenshot]

[code screenshot]

It simply instantiates the DataServiceCollection, hooks up the async handler and then formulates the query. The query is basically what you want to ask the OData service. The context object in the above code is an instance generated by the service reference.
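In outline, the client code resembles the sketch below; the WorkoutEntities context class and Workout type are generated by Add Service Reference, and the service URI is a placeholder, so treat this as an illustration rather than the author’s exact listing:

using System;
using System.Data.Services.Client;
using System.Linq;

public class WorkoutViewModel
{
    // Context class and URI are assumptions; the real ones come from the service reference.
    private readonly WorkoutEntities context =
        new WorkoutEntities(new Uri("http://yourservice.cloudapp.net/WorkoutDataService.svc"));

    private DataServiceCollection<Workout> workouts;

    public void LoadWorkouts()
    {
        workouts = new DataServiceCollection<Workout>(context);
        workouts.LoadCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                // Bind 'workouts' to the UI here; it has been populated asynchronously.
            }
        };

        var query = from w in context.Workouts select w;  // what we ask the OData service
        workouts.LoadAsync(query);                         // issues the HTTP request
    }
}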

When you call LoadAsync, it makes an HTTP call against that URI. Now run your client application, which brings the data from the service and shows it in the Windows Phone emulator as below:

[screenshot]

The OData protocol (http://odata.org) was created to provide a simple, common way to interact with data on the web from any platform or device.


Lokesh Kumar explained how to Define data in [a] LightSwitch Application in a 10/10/2011 post to the C# Corner blog:

Prologue:
Visual Studio LightSwitch is a new development tool designed specifically for rapid application development (RAD) of line-of-business applications. Data-centric applications can be easily developed by simply designing data structures and the related UI.

Defining the data in LightSwitch:

We can easily define the data in Visual Studio LightSwitch. We just have to define the data structures and the relationships among the tables.

Step 1: Create a new LightSwitch application in Visual Studio 2010.

LightSwitch application

Step 2: Click on create new table.

create table in LightSwitch

Step 3: Change the table name to Company and add the attribute names and their corresponding types.

create table in LightSwitch

Step 4: By clicking on Data Sources, add another table, change its name to Employee, and add the attributes and their data types.

LightSwitch application

Step 5: By clicking on Data Sources, add another table, change its name to Joining, and add the attributes and their data types.

LightSwitch table

Step 6: Define the relationship between Employees and Joining as many-to-one by clicking on the Relationship tab.

relation ship in LightSwitch

Step 7: By clicking on Data Sources, add another table, change its name to SalaryDetails, and add the attributes and their data types.
LightSwitch tables

Step 8: Define the relationship between Employees and SalaryDetails as many-to-one by clicking on the Relationship tab.

LightSwitch relationship

Summary
This is the procedure for defining data in Visual Studio LightSwitch.


Return to section navigation list>

Windows Azure Infrastructure and DevOps

Ryan Bateman (@ryanbateman) reported in his Cloud Provider Global Performance Ranking – August post to the CloudSleuth blog that Windows Azure from the Chicago data center was again #1 in performance:

Here we are again with our monthly results gathered via the Global Provider View, ranking the response times seen from various backbone locations around the world. With the recent additions (in late August) of backbone nodes in Singapore on SingTel and in Tokyo on KDDI, we intend to watch the potential effects on the global averages closely.

[screenshot]


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

Martin Tantow (@mtantow) listed The Top 5 Cloud Security Companies to Watch in a 10/12/2011 post to the CloudTimes blog:

All companies planning to migrate to the cloud have serious security issues to consider. This is the reason why a lot of startup companies are out in the market to address this problem and to take advantage of this huge market base.

Check out five of the most recent companies offering their best cloud security features.

CloudPassage

CloudPassage has its headquarters in San Francisco and was founded by CEO Carson Sweet, who came from Agentrics; Executive Chairman Talli Somekh, from Musea Ventures; and Vice President of Engineering Vitaliy Geraymovych, from a technology consulting firm.

The key features of their product include Halo SVM and Firewall SaaS, and the best part of their service is that it is free.

Their Selling Point: CloudPassage set its sights on reliable security for the virtual server platform, which it believes is the lifeblood of the virtual network. CEO Sweet said, “People sometimes sort of forget about the virtual network around those machines that need to be secured.”

CipherCloud

CipherCloud’s headquarters is located at Cupertino, California. Founder and CEO is Pravin Kothari; he was formerly the founder, CTO and interim-CEO for Agillance. He also held the vice president of engineering and co-founder position for ArcSight.

They use tokenization and cloud data encryption for their security features, which they offer on a pay-per-service subscription at $20 per month.

Their Selling Point: CipherCloud focuses on encryption and tokenization as its primary security features, which protect both data in storage and data in migration. Kothari said that the security threat lies in the hands of the unsecured enterprise customers and not their cloud providers. He said, “Our gateway is designed as a stateless solution. This, along with our high-performing encryption algorithms, ensures near-zero impact on the performance.”

HyTrust

HyTrust’s headquarters is located in Mountain View, California. The following key people are responsible for the company startup: John De Santis, CEO, is the former vice president of Cloud Services at VMware; Eric Chiu and Renata Budko are co-founders and act as president and vice president of marketing; Hemma Prafullchandra is CTO, and is also the former CTO of FuGen Solutions.

HyTrust offers their newest version 2.2, which supplies a centralized access and control for cloud servers that provide reliable security and compliance. The service per host supported is set at $1,000.

Their Selling Point: Being chosen last year as a “2010 Company to Watch” gives them an edge over their competitors. Their security product also stood out against established industry forerunners in a Trend Micro evaluation of various virtualization security management tools.

Bromium

Bromium is a company that is still on the stealth mode and is currently developing their security features. Their headquarters are located in Cupertino, California and Cambridge, U.K. It is headed by their founders Gaurav Banga, former CTO of Phoenix Technologies; Simon Crosby, former Citrix CTO of Data Center and Cloud Division and Ian Pratt, current chairman of Xen.org and another Citrix expert.

Since the product is not out yet there is no pricing available, but there are strong hints that Bromium’s product will use virtualization to secure all endpoints of the cloud enterprise.

Their Selling Point: Crosby remains to be one of the most outspoken about the public cloud. He says that security breaches are not the result of cloud loopholes, rather they come from the unprotected cloud enterprise customer base, which they are hoping to address.

High Cloud Security

The company’s headquarters is also located in Mountain View, California. High Cloud Security’s co-founders include Bill Hackenberger, who is also CEO and president, and Steve Pate. Hackenberger was a key person in startups like AIM Technology; he also served as vice president of engineering for Caymas Systems and Vormetric. Pate, on the other hand, served as CTO of Vormetric and developed virtualization at HyTrust.

Their product has not released any pricing yet, but the company says its virtual machine encryption offering will provide a vital security platform.

Their Selling Point: Their product marketing strategy took a different route by releasing an excerpt from an article that pointed out the three quick steps on how to steal and hijack important enterprise data. They emphasized that all machines are running virtual at that time, so they came up with a solution that the only way to prevent hijackers’ access to the database is to encrypt all sensitive information at the storage layer.


<Return to section navigation list>

Cloud Computing Events

Brian Loesgen (@BrianLoesgen) announced on 10/12/2011 a Windows Azure DevCamp in Mountainview Oct 28-29:


  • Date Oct. 28-29, 2011
  • Time 8:00AM - 6:00PM
  • Location Silicon Valley Center, 1065 La Avenida St Building 1, Mt. View, CA 94043
  • REGISTER NOW >>
  • Events run from 8:00 AM - 6:00 PM

Come join us for 2 days of cloud computing!
Developer Camps (DevCamps for short) are free, fun, no-fluff events for developers, by developers. You learn from experts in a low-key, interactive way and then get hands-on time to apply what you've learned.

What am I going to learn at the Windows Azure Developer Camp?

At the Azure DevCamps, you'll learn what's new in developing cloud solutions using Windows Azure. Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers. Windows Azure provides an operating system and a set of developer services used to build cloud-based solutions. The Azure DevCamp is a great place to get started with Windows Azure development or to learn what's new with the latest Windows Azure features.

Come for one day, come for both. Either way, you'll learn a ton. Here's what we'll cover:

Day 1

  • Getting Started with Windows Azure
  • Using Windows Azure Storage
  • Understanding SQL Azure
  • Securing, Connecting, and Scaling Windows Azure Solutions
  • Windows Azure Application Scenarios
  • Launching Your Windows Azure App

Day 2

On Day 2, you'll have the opportunity to get hands on developing with Windows Azure. If you're new to Windows Azure, we have step-by-step labs that you can go through to get started right away. If you're already familiar with Windows Azure, you'll have the option to do build an application using the new Windows Azure features and show it off to the other attendees for the chance to win prizes. Either way, Windows Azure experts will be on hand to help.

Featured Presenter: James Conard
James Conard is the Senior Director of the Windows Azure Evangelism team at Microsoft. His team is responsible for helping developers build applications in Windows Azure by providing key development resources including toolkits, samples, and training kits and by engaging with the community through conferences, training events and Developer Camps.

We'll also see presentations from: Bruno Terkaly, Wade Wegner, Roger Doherty, Nick Harris and Bruno Nowak.



Six Windows Azure Luminaries will be speakers in the Seattle Interactive Conference’s Cloud Experience track on 11/2/2011:

SIC is developing a world-class speaker roster comprised of online technology’s most successful and respected personalities, alongside earlier-stage entrepreneurs who are establishing themselves as the leaders of tomorrow. SIC isn’t just about telling a story, it’s about truly sharing a story in ways that provide all attendees with a thought provoking experience and actionable lessons from the front lines.

Register today using the promo code “azure 200” and attend SIC for only $150 (a $200 savings).


The cloud is fundamentally changing the way we share and access information and media. Experts predict that before the end of the decade, cloud connected devices will eclipse desktop computing. And, the business opportunity is huge – among mobile apps alone, the market is expected to increase from $7B in 2009, to $17.5B by 2012.

The Cloud Experience track at SIC will prepare you for what’s ahead so you can capture the business opportunity today.

Technical Content, Technical Experts

This Cloud Experience track is for experienced developers who want to learn how to leverage the cloud for mobile, social and web app scenarios. No matter what platform or technology you choose to develop for, these sessions will provide you with a deeper understanding of cloud architecture, back-end services and business models so you can scale for user demand and grow your business.

  • Day 1 of the Cloud Experience track provides a full day of technical content and case studies for mobile, social and web scenarios. Sessions are open to all registered attendees of SIC. View session and speaker lineup.
  • On Day 2, individuals and teams will compete to win hot prizes and a slice of developer fandom. Learn more about the Mobile App Challenge.
  • Get assistance and advice for developing, deploying and managing mobile, social and web apps in the cloud. Speakers and technical experts will be available in the Windows Azure Lounge during both conference days.
Cloud Experience Topics & Speakers

Cloud Experience track speakers represent a wide variety of experiences and expertise. View the current list of speakers.
Below is a list of planned Cloud Experience session topics. Check back in late September for more information, including session descriptions, speaker assignments, time slots and Cloud Experience keynote speakers.
MOBILE
Zero to Hero: Windows + Cloud
Zero to Hero: Android + Cloud
Zero to Hero: iOS + Cloud
Mobile + Cloud: Services, Scenarios and Storage
Starting Small and Growing with the Cloud: Architecting for Cost and User Demand

SOCIAL
Building Highly Scalable Social Campaigns in the Cloud

WEB
Building Highly Scalable Web Applications in the Cloud

Compete in the Mobile App Challenge

On Day 2 of the conference, individuals and teams will compete to win hot prizes and a slice of developer fandom. Judges will select participants from the Windows Azure Lounge to present their solutions. Presenters will have 15 minutes to pitch their mobile + cloud solutions on stage to the panel and audience. Presenters may pitch and/or demo their idea for creating a mobile application utilizing Windows Azure Services in the backend. More info to come on winner categories and Mobile App Challenge judges panel.

Register today using the promo code “azure 200” and attend SIC for only $150 (a $200 savings).


PaddyS reported Cumulux-Microsoft Windows Azure Tour of the US Midwest in a 10/11/2011 post to the Cumulux blog. From Microsoft Events’ description:

  • Meal: Yes
  • Language(s): English.
  • Product(s): cloud services.
  • Audience(s): IT Decision Maker.

Is your organization looking to reduce IT costs, increase resources on enabling the business and improving application development agility? Join us to find out how other businesses are harnessing the power of Cloud Computing to innovate new business models, reach new customers and how you can start moving production workloads to the cloud.

 

Location | Delivery Date | Address
Bloomington | October 25, 2011 | 2203 E. Empire St., Suite J, Bloomington, IL 61704
Cincinnati | October 26, 2011 | 4605 Duke Drive, Suite 800, Mason, OH 45040
Columbus | October 27, 2011 | 8800 Lyra Dr., Suite 400, Columbus, OH 43240
Indianapolis | October 27, 2011 | Parkwood Business Park, Building 4, Indianapolis, IN 46240
Cleveland | October 28, 2011 | Park Center III, Independence, OH 44131
Nashville | November 14, 2011 | 2555 Meridian Blvd, Suite 300, Franklin, TN 37067
Memphis | November 15, 2011 | 6465 N. Quail Hollow Rd., Suite 200, Memphis, TN 38120
St Louis | November 16, 2011 | 3 City Place Dr., Suite 1100, St. Louis, MO 63141
Kansas City | November 17, 2011 | 10801 Mastin Blvd., Suite 620, Overland Park, KS 66210
Minneapolis | November 3, 2011 | 8300 Norman Center Dr., Suite 950, Bloomington, MN 55437
Southfield | November 4, 2011 | 1000 Town Center Dr., Suite 1930, Southfield, MI 48075
Omaha | November 8, 2011 | One Valmont Plaza, Suite 201, Omaha, NE 68154
Des Moines | November 9, 2011 | 4601 Westtown Parkway, Suite 136, West Des Moines, IA 50266
Milwaukee | December 6, 2011 | N19 W24133 Riverwood Dr., Suite 150, Waukesha, WI 53188



<Return to section navigation list>

Other Cloud Computing Platforms and Services

Matthew Weinberger (@M_Wein) reported Google [Describes] App Engine Premier, Storage for Cloud ISVs in a 10/12/2011 post to the TalkinCloud blog:

Google is keeping its cloud infrastructure momentum going with a gaggle of announcements designed to increase its appeal to cloud ISVs, systems integrators and larger enterprises. That includes the launch of Google App Engine Premier accounts, the full release of Google Cloud Storage (formerly Google Storage for Developers) and the Google Prediction API leaving beta.

Each individual announcement is actually fairly simple, so it’s no wonder that Google rolled it all into one big official blog entry. Here’s the breakdown, in convenient bullet point form:

  • Google App Engine Premier Accounts give developers and integrators priority support, a 99.95 percent uptime SLA, and the ability to deploy unlimited apps on Google’s PaaS solution, all for $500 a month.
  • Google Storage for Developers has left Google Labs with the new name of Google Cloud Storage, and with that change comes a new file read/write API and detailed storage analytics.
  • Additionally, Google Cloud Storage is lowering bandwidth and storage charges, with volume discounts for larger customers and the complete elimination of data ingress fees, bringing it in line with Microsoft Windows Azure and Amazon Web Services. According to Google, depending on their plan customers may see a monthly bill decrease of up to 40 percent.
  • The Google Prediction API has left Google Code Labs, letting developers build smarter apps that can make extremely educated guesses as to the future of data sets. With the full release, the Google Prediction API gets PMML v4.01 support and data anomaly detection.

The blog entry also contains a shout-out to Google Cloud SQL, the search giant’s new cloud database service for App Engine applications. I've said it before, and I'll say it again: Google is fast-tracking its enterprise and developer plays with these cloud infrastructure enhancements, and TalkinCloud will be keeping it under the microscope to see how it pays off for Google Apps Authorized Resellers and ISVs.

Read More About This Topic


Jeff Barr (@jeffbarr) announced the capability to Launch EC2 Spot Instances in a Virtual Private Cloud in a 10/11/2011 post:

Over the past two months, we have had the opportunity to share several exciting developments regarding Spot Instances. We have told you how to Run Amazon Elastic MapReduce on EC2 Spot Instances, we published Four New Amazon EC2 Spot Instance Videos, and we outlined the excitement around Scientific Computing with EC2 Spot Instances. Others in the community have also shared their experiences with Spot Instances. You may have read about Cycle Computing running a 30,000 core molecular modeling workload on Spot for $1279/hour or Harvard Medical School moving some of their workload to Spot after a day of engineering, saving roughly 50% in cost.

In typical Amazon fashion, we like to keep the momentum going. We've combined two popular AWS features, Amazon EC2 Spot Instances and the Amazon Virtual Private Cloud (Amazon VPC). You can now create a private, isolated section of the AWS cloud and make requests for Spot Instances to be launched within it. With this new feature, you get the flexibility and cost benefits of Spot Instances along with the control and advanced security options of the VPC.

Based on feedback from customers in the community, we believe this feature will be ideal for use cases like scientific computing, financial services, media encoding, and "big data." As an example, we have received a number of requests from members of the scientific community who have been mining petabytes of confidential (e.g. human genome or sensitive customer data) and/or proprietary data (e.g. patentable). Traditionally they have set up their own software VPN connection and launched Spot Instances. Now they can leverage all of the security and flexibility benefits associated with VPC.
We have also heard a number of customers looking for ways to integrate on-premise and cloud solutions, and to burst into the cloud. These customers now can leverage VPC and Spot for a great low cost solution to this "computation gap."
Launching Spot instances into an Amazon VPC is similar to launching Spot instances, but you need to specify the VPC you would like to run your Spot Instances within. Launching Spot Instances into Amazon VPC requires special capacity behind the scenes, which means that the Spot Price for Spot Instances in an Amazon VPC will float independently from those launched outside of Amazon VPC.

The AWS Management Console includes complete support for this new feature combo. You can examine the spot price history for EC2 instances launched within a VPC (this screen shot was taken before the release; we added some high bids to make the chart look more interesting):

You can use the console's Request Instances Wizard to make a request to launch Spot Instances in any of your VPCs at the maximum price of your choice (just be sure to choose VPC for the Launch Into option):

For more information on using Spot Instances in VPC, please visit the EC2 User's Guide.

If you want to learn more about the ins and outs of Spot Instances, I recommend that you spend a few minutes watching the following videos:

  • Getting Started With Spot Instances
  • Deciding on Your Spot Bidding Strategy
  • How to Manage Spot Instance Interruption

There is also a video coming soon on how to launch a Spot Instance within VPC, so check back at the Spot Instance web page again soon. I will tweet when it becomes available (please follow me (@jeffbarr) for more details).

As I mentioned, the Spot service has been rapidly evolving, and we would love to get your feedback on the next features you’d love to see. Please feel free to email spot-instance-feedback@amazon.com if you have more feedback. Alternatively, to learn more about Spot, please visit the Spot Instance page for more details.

Jeff forgot to add links to his videos list.


Dave Smith asked Can Box.net Challenge Amazon and Microsoft in the Cloud? in a 10/11/2011 post to International Business Times’ Tech blog:

Box.net, a cloud computing services provider based in Palo Alto, Calif., raised $81 million in a Series D funding round led by Bessemer Venture Partners, NEA, prior investors Draper Fisher Jurvetson Growth and Andreesen Horowitz, and strategic investors SAP Ventures and Salesforce.com. With the latest round, Box.net has raised $162 million in funding to date, reportedly earning a valuation above $600 million.

Box.net raised $81 million in funding, and now hopes to build infrastructure to compete with the likes of Microsoft, Oracle and IBM.

Box.net will reportedly utilize its new funds to develop new products, expand internationally, and build more data centers here in the U.S. in order to compete with larger competitors. But the most exciting project happening at Box.net is the Box Innovation Network, which seeks to provide funding, consulting, and other resources to developers who want to build applications on the Box platform.

"Box is helping organizations make better decisions faster by bringing new innovation to business information," said Jai Das, managing director, SAP Ventures. "As an investor and partner, we're excited about how Box is reinventing content management and collaboration, and look forward to working with them to make customers more productive."

Box.net also announced a new platform called the Box Innovation Network (/bin), which is designed to create an ecosystem for enterprise and mobile applications.

"We need to provide an amazing experience for the 100,000 businesses already using Box, including 77% of the Fortune 500, while growing our global user base at an unprecedented pace," said Box.net CEO and co-founder Aaron Levie in a blog post. "We need to invest aggressively in scaling our team and infrastructure - two things that will always require significant capital, when done correctly."

Box.net, which boasts 7 million users and stores over 300 million documents, is a platform for collaboration, social, and mobile cloud computing. The company was founded in 2005 by Levie and Dylan Smith, who both designed the service while attending college across the country from one another--Smith attended Duke University in Durham, North Carolina, while Levie attended school at the University of Southern California in Los Angeles. The project, which sought to provide cheap online data storage to the masses, initially garnered attention from HDNet chairman and Dallas Mavericks owner Mark Cuban, who provided the fledgling start-up with angel capital.

Five years of steady growth and four rounds of investor funding later, Box.net has evolved into more than just cloud storage; the program is a comprehensive application for communication, remote syncing, and collaboration. Currently, 100,000 businesses utilize the Box.net platform, with 250,000 new users signing up each month.

Box.net may be symbolic of a greater trend towards the cloud, but the real reason behind this company's success streak is because of its dedication to quality. Both Levie and Smith have been said to have incredible work ethic and attention to detail, and the two co-founders have been busy constantly tweaking and adding valuable functionalities to make a terrific product better.

"There's so much change taking place in the enterprise, and we're trying to build out go-to platform for how people use data, work, and collaborate," Levie said. "We're redefining how enterprises share and manage content on Box, while also building a powerful, open ecosystem of partners and developers to help our customers get more value and flexibility from their information than ever before possible."

In the same company blog post, Levie called out the "old guard" in cloud computing, specifically Microsoft, Oracle, and IBM.

"But for what the big players may lack in innovation or focus, they make up for in muscle," Levie wrote. "Microsoft notoriously crushes competition on the third try. Oracle refused to give up on the applications market, and is now moving to the cloud with a strong position. IBM has customer reach and brand credibility that enable it to serve the Fortune 500 better than anyone else."

In an interview in March, Levie also pointed out that Amazon's Web-based storage offering called Cloud Drive lacked a key element: the ability to share content between accounts.

"The power of having your data in the cloud, as opposed to stored locally or on a remote disk, is that you can instantly and easily share with people you trust," wrote Levie in a company blog post. "Amazon will need to create a compelling experience around sharing your media or content frictionlessly."

Box.net currently integrates with 120 applications, including other cloud solutions like Salesforce, Google Apps, Netsuite, and SAP Streamwork. The company may not be a "big player" just yet, but if Box can attract more developers and applications, its accessibility may be the key to competing with the likes of Microsoft and Amazon.

Users can sign up for a free Box.net personal account, which provides 5 GB of Web storage with simple sharing and mobile app access; Amazon's Cloud Drive offers the same 5 GB of storage space with an annual fee for additional space, while Box.net rival Dropbox only offers 2 GB for free.

Box.net also has two other offerings, including a business plan that costs $15 per user per month, and a scalable and customizable enterprise plan for large companies with multiple offices.

$81 million would build you about 20% of a cloud-computing data center in the class of those recently deployed by Microsoft.


<Return to section navigation list>
