Wednesday, April 07, 2010

Windows Azure and Cloud Computing Posts for 4/7/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in April 2010. 

Azure Blob, Table, Queue and Drive Services

Bruce Kyle explains How to Use Windows Azure Drive in this 4/7/2010 post:

Applications running in the Windows Azure cloud can use existing NTFS APIs to access a network-attached durable drive. The durable drive is actually a Page Blob formatted as a single-volume NTFS virtual hard drive (VHD).

A post by Brad Calder, Windows Azure Drive Demo at MIX 2010 on the Windows Azure Storage Team Blog, shows how you create a virtual drive in Windows 7. You can then upload the drive for access by your cloud application. Your application specifies the storage account name and its secret key in the ServiceConfiguration.cscfg in the same way as it does on your development computer.

Then you can access the drive from code in your application by:

  1. Initializing the drive cache so all processes and threads running under that instance can mount and manipulate drives.
  2. Creating an object that refers to the drive.
  3. Mounting the drive.
  4. Using the drive in your applications to read from or write to a drive letter (e.g., X:\) that represents a durable NTFS volume for storing and accessing data.
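
The four steps above map onto the CloudDrive API roughly as follows. This is a minimal sketch, not Bruce's code: the "DataConnectionString" setting, the "DriveCache" local resource, and the blob names are assumptions, and exact signatures may vary slightly with your SDK version.

```csharp
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class DurableDriveExample
{
    public void UseDrive()
    {
        // 1. Initialize the drive cache from a LocalStorage resource declared in ServiceDefinition.csdef.
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        // 2. Create an object that refers to the drive (a Page Blob in your storage account).
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        blobClient.GetContainerReference("drives").CreateIfNotExist();
        CloudDrive drive = account.CreateCloudDrive(
            blobClient.GetContainerReference("drives")
                      .GetPageBlobReference("mydata.vhd")
                      .Uri.ToString());

        // First time only: format the Page Blob as a 64 MB NTFS volume
        // (this throws if the drive already exists, so guard it in real code).
        drive.Create(64);

        // 3. Mount the drive; the returned path is a drive letter such as "X:\".
        string root = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);

        // 4. Read from and write to the durable NTFS volume with ordinary System.IO calls.
        File.WriteAllText(Path.Combine(root, "hello.txt"), "stored on a Windows Azure Drive");

        drive.Unmount();
    }
}
```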

You can use the Windows Azure Drive APIs in your Windows Azure application to:

  • Create Drive – Creates a Page Blob formatted as a single-partition NTFS volume VHD.
  • Initialize Cache – Allows an application to specify the location and size of the local data cache for all Windows Azure Drives mounted for that VM instance.
  • Mount Drive – Takes a formatted Page Blob and mounts it to a drive letter for the Windows Azure application to start using.
  • Get Mounted Drives – Returns the list of mounted drives, giving the drive letter and Page Blob URL for each mounted drive.
  • Unmount Drive – Unmounts the drive and frees up the drive letter.
  • Snapshot Drive – Allows the client application to create a backup of the drive (Page Blob).
  • Copy Drive – Provides the ability to copy a drive or snapshot to another drive (Page Blob) name to be used as a read/writable drive.

For a short 10-minute demo of using the drives, see the MIX 2010 talk on Windows Azure Storage. The Windows Azure Drives demo starts at 19:20.

Bruce continues with links to more Windows Azure Drive resources.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

James Hamilton’s Stonebraker on CAP Theorem and Databases post of 4/7/2010 continues his analysis of the NoSQL movement’s premises about the Consistency, Availability and Partition tolerance (CAP) theorem and eventual consistency:

Mike Stonebraker published an excellent blog posting yesterday at the CACM site: Errors in Database Systems, Eventual Consistency, and the CAP Theorem. In this article, Mike challenges the application of Eric Brewer’s CAP Theorem by the NoSQL database community. Many of the high-scale NoSQL system implementers have argued that the CAP theorem forces them to go with an eventually consistent model.

Mike challenges this assertion, pointing out that some common database errors are not avoided by eventual consistency and that CAP really doesn’t apply in these cases. If you have an application error, administrative error, or database implementation bug that loses data, then it is simply gone unless you have an offline copy. This, by the way, is why I’m a big fan of deferred delete. This is a technique where deleted items are marked as deleted but not garbage collected until some days or preferably weeks later. Deferred delete is not full protection but it has saved my butt more than once and I’m a believer. See On Designing and Deploying Internet-Scale Services for more detail.

CAP and the application of eventual consistency don’t directly protect us against application or database implementation errors. And, in the case of a large-scale disaster where the cluster is lost entirely, again, neither eventual consistency nor CAP offers a solution. Mike also notes that network partitions are fairly rare. I could quibble a bit on this one. Network partitions should be rare, but net gear continues to cause more issues than it should. Networking configuration errors, black holes, dropped packets, and brownouts remain a popular discussion point in post mortems industry-wide. I see this improving over the next 5 years but we have a long way to go. In Networking: the Last Bastion of Mainframe Computing, I argue that net gear is still operating on the mainframe business model: large, vertically integrated and expensive equipment, deployed in pairs. When it comes to redundancy at scale, 2 is a poor choice.

Mike’s article questions whether eventual consistency is really the right answer for these workloads. I made some similar points in “I love eventual consistency but…” In that posting, I argued that many applications are much easier to implement with full consistency and that full consistency can be practically implemented at high scale. In fact, Amazon SimpleDB recently announced support for full consistency. Apps needing full consistency are now easier to write and, where only eventual consistency is needed, it’s available as well.

Don’t throw full consistency out too early. For many applications, it is both affordable and helps reduce application implementation errors.

Mike Kelly shares his notes on David Robinson’s SQL Azure FireStarter session in a Windows Azure SQL Notes post of 4/6/2010, which begins:

  • Goal is to convince you that there is no difference between SQL Server and SQL Azure.
  • First called "SQL Data Services for Azure" at PDC 2008.
    • Looked a lot like the Azure table storage that Brad just talked about.
    • Feedback - why not expose SQL Server?
  • Done that - extended the SQL Server platform to the cloud.
    • Rich ecosystem of tools: BI, reporting, VS integration, …
    • Goal -> same SQL Server you have on premise you now have in the cloud.
  • Difference between hosted DB and DB as a service
    • SQL Azure combines best of both.
    • Nothing to install, nothing to maintain, nothing to patch.
    • Every 8 weeks provide new features as a service update
      • Service Update 2 deployment started today. Expect it to complete by Friday (4 days).
    • Use existing tools - Management Studio, Visual Studio - all just work
      • Need November 2009 CTP of SQL Server 2008 R2 to manage Azure SQL Services
    • Automation manages servers in the data centers. …

Mike promises “slides and sample code are to be posted later and I will update the post with them when they are.”

Sajid Qayyum explains Referencing SQL Azure Database[s] in this MSDN blog post of 4/6/2010:

Irrespective of where your application lies, i.e. in the cloud or locally, you can simply connect to the SQL Azure database by replacing your local DB connection string with the SQL Azure connection string. The connection string for any SQL Azure database can be obtained on the "Server Administration" screen in your SQL Azure account by selecting the database and clicking the Connection String button under the Databases tab.
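
For reference, a SQL Azure connection string follows the usual SqlClient format, with the login qualified by the server name and encryption turned on. The server, database, and credentials below are placeholders; copy your real string from the Connection String button Sajid describes:

```csharp
using System;
using System.Data.SqlClient;

class SqlAzureConnectionSample
{
    static void Main()
    {
        // Placeholder values; substitute the string shown in the SQL Azure portal.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
            "User ID=youruser@yourserver;Password=yourpassword;" +
            "Trusted_Connection=False;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT GETUTCDATE()", connection))
        {
            connection.Open();
            Console.WriteLine("Server UTC time: {0}", command.ExecuteScalar());
        }
    }
}
```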

Rituraj’s Troubleshooting SQL Azure Connectivity post of 4/6/2010 shows you:

How to resolve some of the common connectivity error messages that you would see while connecting to SQL Azure:

  • A transport-level error has occurred when receiving results from the server. (Provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
  • System.Data.SqlClient.SqlException: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
  • An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections
  • Error: Microsoft SQL Native Client: Unable to complete login process due to delay in opening server connection.
  • A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
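
Several of these errors are transient; SQL Azure closes idle or throttled connections, so in addition to the firewall and configuration fixes Rituraj covers, client code commonly wraps connection attempts in simple retry logic. A rough sketch of that pattern (not taken from the post):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class SqlAzureRetry
{
    // Runs an action against a new connection, retrying a few times on SqlException.
    public static void Execute(string connectionString, Action<SqlConnection> action,
                               int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    action(connection);
                    return;
                }
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts) throw;
                // Back off briefly; production code would also inspect the error number
                // to distinguish transient faults from permanent ones.
                Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
            }
        }
    }
}
```

A call then looks like SqlAzureRetry.Execute(connectionString, c => new SqlCommand("SELECT 1", c).ExecuteNonQuery());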

Andrew J. Brust reviews OData for Visual Studio Magazine in this 4/1/2010 Open Data, Open Microsoft post, which begins:

I've always been a data guy. I think data maintenance, sharing and analysis is the inspiration for almost all line-of-business software, and technology that makes any or all of it easier is key to platform success. That's why I've been interested in WCF Data Services (previously ADO.NET Data Services) since it first appeared as the technology code-named "Astoria." Astoria was based on a crisp idea: representing data in AtomPub and JSON formats, as REST Web services, with simple URI and HTTP verb conventions for querying and updating the data.

Astoria, by any name, has been very popular, and for good reason: It provides refreshingly simple access to data, using modern, well-established Web standards. Astoria provides a versatile abstraction layer over data access, but does so without the over-engineering or tight environmental coupling to which most data-access technologies fall prey. This elegance has enabled Microsoft to do something equally unusual: separate Astoria's protocol from its implementation and publish that protocol as an open standard. We learned that Microsoft did this at its Professional Developers Conference (PDC) this past November in Los Angeles, when Redmond officially unveiled the technology as Open Data Protocol (OData). This may have been one of Microsoft's smartest data-access plays, ever. …

Peter Kellner’s OData Query Option top Forces Data To Be Sorted By Primary Key post of 3/31/2010 reports that:

I’ve recently started using Microsoft’s WCF Data Services, which supports OData services. What this means is that we can access resources by simply specifying a URI. This concept greatly simplifies building an ORM layer on a web site, as well as creating the linkage between the server-side data and the client-side application, which in my case is usually a browser.

So, the issue this blog addresses is that if you form a URI with the parameter $top={anything}, your data will automatically be sorted. The OData documentation for $top basically says that, but it could be clearer. It says the following:

“If the data service URI contains a $top query option, but does not contain a $orderby option, then the Entries in the set needs to first be fully ordered by the data service.”

What actually happens is that when you use the $top option, the data will be sorted for you 100% of the time, whether you include an $orderby clause or not.
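
A quick way to see the behavior Peter describes is to issue the two requests below against any WCF Data Services endpoint (the service root and entity set here are hypothetical) and compare the ordering of the feeds that come back:

```csharp
using System;
using System.Net;

class ODataTopSample
{
    static void Main()
    {
        // Hypothetical service root and entity set; substitute your own OData endpoint.
        const string serviceRoot = "http://example.com/Northwind.svc";

        // $top without $orderby: the service must still return a fully ordered set,
        // so WCF Data Services sorts by the entity key before applying $top.
        string topOnly = serviceRoot + "/Customers?$top=10";

        // An explicit $orderby overrides the default key ordering.
        string topWithOrder = serviceRoot + "/Customers?$top=10&$orderby=CompanyName";

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(topOnly));
            Console.WriteLine(client.DownloadString(topWithOrder));
        }
    }
}
```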

I put together a small example that demonstrates this, and I’ll briefly step through the parts of the code attached to this post that show it happening.


Here is the actual Visual Studio 2010 RC project you can run yourself:

Solution: WCFServicesWithODataExample.zip

Sean Cleaver (@seancleaver) announced in his Data Sync Consuming OData Feeds post of 3/25/2010:

We’re writing a new Data Provider for Data Sync that can consume an OData feed. For example, below we’re pulling in the Speakers data from the recent MIX event using the published OData feed http://api.visitmix.com/OData.svc/.

Sean appears to be describing Simego’s Data Synchronisation Studio.
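
Because an OData collection is exposed as an AtomPub feed, even a generic syndication client can read it. A minimal sketch against the MIX feed Sean cites (the Speakers entity-set name is assumed from his description, and entry titles are just one convenient property to print):

```csharp
using System;
using System.ServiceModel.Syndication;
using System.Xml;

class ODataFeedReader
{
    static void Main()
    {
        // The published MIX OData feed from Sean's example.
        const string url = "http://api.visitmix.com/OData.svc/Speakers";

        using (XmlReader reader = XmlReader.Create(url))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem entry in feed.Items)
                Console.WriteLine(entry.Title.Text);
        }
    }
}
```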

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

T-10 Media claims “the Service Bus can be used to build hybrid apps which span both on-premise and cloud services” in a The Hybrid Cloud and Azure [Blog] post of 4/7/2010:

There’s been a lot of buzz about the ‘hybrid’ cloud – the blending of on-premise services with cloud-based services. CloudKick recently launched CloudKick Hybrid, a tool for monitoring cloud and on-premise servers from a single console (see story here); Nimsoft, which has a similar monitoring tool, was recently acquired by CA for $350m; and hosting provider VoxTel recently announced unified admin/monitoring tools for its cloud and server offerings.

There is an undoubted need for a hybrid architecture for many larger corporations, since migrating existing apps to the cloud is not as simple as a lot of demos show and there is a perception (whether real or not) that data is less secure in the cloud. Enter hybrid apps – maintain the data on premise, or consume on-premise apps from a cloud service.

Of course it is possible to communicate between on-premise data sources or apps and cloud-based apps using SOAP/REST communication protocols; however, there are two major obstacles – discovering the service endpoints (since these may change due to dynamically assigned IPs) and navigating through firewalls. These problems can be overcome by allowing apps to selectively open ports, which is inherently insecure, or by using relay systems that sit between the firewall and the apps and act as a bridge, but these systems tend to be very complicated and hard to implement.

The Azure Service Bus attempts to solve this issue by providing a service with which applications that need to communicate with each other can register. The requesting app is given a Service Bus endpoint to communicate with the data source/service app. Essentially the services are provided by service apps run behind the firewall, and the connection endpoints are provided by the Azure Service Bus. It should be noted that the Service Bus allows communication with non-.NET services, so Linux/UNIX-hosted apps can register with the Service Bus and be consumed by .NET apps.

Security is provided by Azure AppFabric Access Control, which applies user-defined rules to ensure security when an app obtains claims tokens via the STS provided by Access Control.

Thus the Service Bus can be used to build hybrid apps which span both on-premise and cloud services.
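
To make that concrete, here is a rough sketch of an on-premises WCF service registering with the Service Bus relay so that remote callers can reach it without any inbound firewall openings. The namespace, service path, and shared-secret credentials are placeholders, and the binding and behavior names are those of the AppFabric .NET SDK of this period, so check them against your SDK version:

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

public class OnPremisesHost
{
    public static void Main()
    {
        // Placeholder namespace and shared-secret credentials from the AppFabric portal.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "EchoService");

        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "your-issuer-key";

        // The on-premises host opens an outbound connection to the relay, so remote
        // callers reach it via the Service Bus endpoint and no inbound ports are opened.
        using (var host = new ServiceHost(typeof(EchoService)))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new NetTcpRelayBinding(), address)
                .Behaviors.Add(credentials);
            host.Open();

            Console.WriteLine("Listening on " + address);
            Console.ReadLine();
        }
    }
}
```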

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Softpedia reports .NET Framework 4 RTW in Windows Azure by Mid-July 2010, According to Microsoft in this 4/7/2010 post:

Microsoft is working not only on the imminent release of .NET Framework 4, but also on expanding support beyond the now-traditional Windows client and server operating systems. In this regard, the Redmond giant is hard at work delivering .NET 4 support for its Cloud platform. The promise from the software giant is that customers leveraging Windows Azure will be able to start taking advantage of .NET 4 for their applications in mid-2010.

“As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM,” a member of the Windows Azure team stated. Microsoft’s next-generation development tools and platform are scheduled for release in the coming week. Visual Studio 2010, .NET Framework 4 and Silverlight 4 will all be officially launched on April 12 at a Las Vegas event.

This places availability of .NET Framework 4 RTW (release to web) support for Windows Azure sometime by mid-July 2010. The software giant could, of course, beat its own deadline, but, so far, it has chosen to give itself a little elbow room in order to get .NET 4 support on Windows Azure to become a reality.

Fact is that Windows Azure already features .NET Framework 4, but not the RTW milestone. “As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications. One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138. As always, you can choose which build of the operating system your application will run on in Windows Azure,” the Windows Azure team member stated.

The Cloud platform version that Microsoft is referring to is Windows Azure Guest OS 1.2 (Release 201003-01). The release went live earlier this week, more precisely on April 5th, 2010, and contains .NET Framework 4.0 RC support. However, as the company stated above, the Windows Azure development environment does not support the .NET 4.0 Framework at this point in time. The purpose of Windows Azure Guest OS 1.2 (Release 201003-01) is to let customers test whether their applications and services will continue to run under normal parameters while using .NET Framework 4.0 libraries. …

Here’s the Windows Azure Team’s official Upcoming Support in Windows Azure for .NET Framework 4 post of 4/6/2010:

As we announced at MIX 2010, .NET 4 will be available in Windows Azure within 90 days of .NET 4 RTM. As part of our preparation for that, the latest operating system build available in Windows Azure contains the .NET 4 RC. Although you cannot use this build to run .NET 4 applications, please let us know if having .NET 4 RC in the build has any effect on your existing applications.

One known effect you may see if you’re consuming generic ASP.NET performance counters is that they will report data only on .NET 4 applications. You can instead use the versioned performance counters as documented in KB article 2022138.
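
As a hedged illustration of the difference, the category names below follow the versioned naming the KB describes for .NET 4; treat them as assumptions and verify against KB 2022138 for your environment:

```csharp
using System;
using System.Diagnostics;

class AspNetCounterSample
{
    static void Main()
    {
        // Generic category: on this guest OS build it reports only .NET 4 applications.
        var generic = new PerformanceCounter(
            "ASP.NET Applications", "Requests/Sec", "__Total__", readOnly: true);

        // Versioned category for .NET 4 (name assumed from the KB's convention).
        var versioned = new PerformanceCounter(
            "ASP.NET Apps v4.0.30319", "Requests/Sec", "__Total__", readOnly: true);

        Console.WriteLine("Generic:   {0}", generic.NextValue());
        Console.WriteLine("Versioned: {0}", versioned.NextValue());
    }
}
```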

As always, you can choose which build of the operating system your application will run on in Windows Azure. See the MSDN documentation for details.

EarthTimes reports “Miami 311, a government transparency solution built with MapDotNet UX, won first place in the Microsoft Windows US Public Sector Azure Contest” in this MapDotNet Application Wins First Place in Microsoft Windows Azure Contest post of 4/7/2010:

ISC is pleased to announce that Miami 311, a government transparency solution built with MapDotNet UX, has won first place in the Microsoft Windows US Public Sector Azure Contest. The application was selected from a field of thirteen entries by Internet voters and a panel within Microsoft.

Miami 311 is an online application supported by the Microsoft Azure cloud platform that allows Miami residents to report, monitor and analyze all non-emergency events that occur in the metropolitan area. If a citizen needs to report a non-emergency situation such as a pothole, street light outage or missed trash pickup, he must first call 311 to report the issue. Once the issue is reported, the citizen can log in to the Miami 311 online tool to track the progress of the issue reported and also view the progress of other city-wide projects in the area. The intent of the site is to increase citizen access to city-wide information.

"MapDotNet UX at version 8.0 is 100% managed .net code, which makes the entire server product and its functionality is deployable to Azure, which runs in a 64-bit virtualized environment. This enables map tile rendering (using a high-speed WPF-based map renderer), spatial querying and spatial data editing all in Azure. This is much more than pushpins in the cloud," said Brian Hearn, MapDotNet Lead Architect. …

Steve Marx’s annoy.smarx.com: Letting the Internet Choose My Wallpaper post of 4/6/2010 describes his demo at the Windows Azure FireStarter event of the same date:

For a demo I gave today at the Windows Azure Firestarter event, I let anyone on the internet change my wallpaper. You too can set my wallpaper by pointing your browser to http://annoy.smarx.com. I’ll try to keep it running on my laptop for the next few days, so any time my laptop’s on and online, you can set my wallpaper. (If my laptop’s not on, you can still see the web page, but you’ll get a strange XML message when you try to change my wallpaper.)


You can even get the code and let people set your wallpaper too (if you have Windows Azure and Service Bus accounts). The entire project took less than eight hours to develop and deploy.

(All videos, slides, and code from the event will be available on Mithun Dhar’s blog in the next few days.) …

Download the code here: http://cdn.blog.smarx.com/files/AnnoySmarx_code.zip, and you too can crowdsource your wallpaper selection!

Be sure to set all the right configuration settings (in ServiceConfiguration.cscfg as well as in the local listener’s app.config) to point to your own storage, CDN endpoint, and service bus namespace. Then just deploy the app and launch the local listener on your desktop/laptop.

Diego Cardenas of Brazil’s Go Airlines reports in this 00:02:51 video:

Diego Cardenas, a Solutions Architect at Go Airlines in Brazil, says that they chose Windows Azure to use Virtual Machine for PS access, thus not having additional costs to maintain services on premise. Go Airlines is also very excited about the new Data Sync feature of Windows Azure.

<Return to section navigation list> 

Windows Azure Infrastructure

Benjamin Day reports on 4/7/2010 In the news: Got quoted in an article about Windows Azure cost estimation by Rob Barry of SearchWinDevelopment.com:

I got interviewed and quoted for an article on Windows Azure cost estimation. One of the key points to remember is that, if your code is deployed to Windows Azure, you’re still getting billed even if it isn’t running.

Click here to read the story.


Katy Ring “claims CIOs of Large Enterprises Likely to Become ‘Vendor Management Officers’” in a preface to the Cloud Network’s Cloud Computing Can Help Pull SMBs Out of Recession, According To New Research post of 4/7/2010:

In K2 Advisory's report "Cloud Computing: A Step Change for IT Services," which analyses the developing market for cloud services, the report's author Dr Katy Ring, Director, K2 Advisory, says that the benefits of cloud computing can provide the business flexibility to help companies operate more effectively in the current economic climate. However, the report finds that adoption rates by smaller organisations of public cloud and SaaS services from vendors such as Amazon and Google will outpace the adoption rate of enterprises by a factor of two. By 2015, for organisations below 1,000 employees, a third to half of IT spend is likely to be with public cloud providers.

Commenting on the findings, Dr Ring said, "In five years' time the provision of IT to mid-sized and smaller businesses (of less than 1000 employees) will be quite distinct in terms of cloud adoption from enterprises. Indeed, it could be argued that small and mid-sized business use of cloud computing will enhance their agility and their ability to bounce back more quickly from the recession of 2009/10. Many Western enterprises, however, will continue to find that their IT systems are increasingly sclerotic, constrained by client-server ERP systems." …

K2 Advisory’s report states that the biggest challenges for enterprise adoption of cloud computing lie with existing investment in legacy systems, and with the potential impact on the internal IT department. Ultimately CIOs suspect that the rise of cloud computing heralds the demise of retaining internal technological expertise. IT services will be delivered by external suppliers who will be managed with (yet to be) established procurement processes. As an increasing amount of an IT group’s effort is spent on external providers delivering systems integration and managed services, this can be seen as evidence that the traditional enterprise IT we’re familiar with is disappearing. In this world, a CIO is a vendor management officer, and most of the technology is taken care of by external suppliers.

K2 Advisory is part of Sift Media, which runs the annual Business Cloud Summit in London. This year's event will be held on November 30th 2010. For more details on the Summit go to www.businesscloud9.com.

Darrell West’s Saving Money Through Cloud Computing research paper from the Brookings Institution’s The Economic Gains of Cloud Computing event of 4/7/2010. From the Executive Summary:

The U.S. federal government spends nearly $76 billion each year on information technology, and $20 billion of that is devoted to hardware, software, and file servers (Alford and Morton, 2009). Traditionally, computing services have been delivered through desktops or laptops operated by proprietary software. But new advances in cloud computing have made it possible for public and private sector agencies alike to access software, services, and data storage through remote file servers. With the number of federal data centers having skyrocketed from 493 to 1,200 over the past decade (Federal Communications Commission, 2010), it is time to more seriously consider whether money can be saved through greater reliance on cloud computing.

Cloud computing refers to services, applications, and data storage delivered online through powerful file servers. As pointed out by Jeffrey Rayport and Andrew Heyward (2009), cloud computing has the potential to produce “an explosion in creativity, diversity, and democratization predicated on creating ubiquitous access to high-powered computing resources.” By freeing users from being tied to desktop computers and specific geographic locations, clouds revolutionize the manner in which people, businesses, and governments may undertake basic computational and communication tasks (Benioff, 2009). In addition, clouds enable organizations to scale up or down to the level of needed service so that people can optimize their needed capacity. Fifty-eight percent of private sector information technology executives anticipate that “cloud computing will cause a radical shift in IT and 47 percent say they’re already using it or actively researching it” (Forrest, 2009, p. 5).

To evaluate the possible cost savings a federal agency might expect from migrating to the cloud, in this study I review past studies, undertake case studies of government agencies that have made the move, and discuss the future of cloud computing. I found that the agencies generally saw between 25 and 50 percent savings in moving to the cloud. For the federal government as a whole, this translates into billions in cost savings, depending on the scope of the transition. Many factors go into such assessments, such as the nature of the migration, a reliance on public versus private clouds, the need for privacy and security, the number of file servers before and after migration, the extent of labor savings, and file server storage utilization rates.

West continues with a description of five steps to “be undertaken in order to improve efficiency and operations in the public sector.” See the Cloud Computing Events section for more details on the event.

The Brookings Institution describes itself as follows:

The Brookings Institution is a nonprofit public policy organization based in Washington, DC. Our mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations that advance three broad goals:

  • Strengthen American democracy;
  • Foster the economic and social welfare, security and opportunity of all Americans and
  • Secure a more open, safe, prosperous and cooperative international system.

Brookings is proud to be consistently ranked as the most influential, most quoted and most trusted think tank.

The preceding post is repeated from Windows Azure and Cloud Computing Posts for 4/6/2010+

<Return to section navigation list> 

Cloud Security and Governance

Dana Gardner’s Governance Grows More Integral to Managing Cloud Computing Security post of 4/7/2010 begins:

Most enterprises lack three essential ingredients to ensure that sensitive information stored with cloud computing hosts remains secure: procedures, policies and tools. So says a joint survey called “Information Governance in the Cloud: A Study of IT Practitioners” from Symantec Corp. and Ponemon Institute.

“Cloud computing holds a great deal of promise as a tool for providing many essential business services, but our study reveals a disturbing lack of concern for the security of sensitive corporate and personal information as companies rush to join in on the trend,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

Where is cloud security training?

Despite the ongoing clamor about cloud security and the anticipated growth of cloud computing, a meager 27 percent of those surveyed said their organizations have developed procedures for approving cloud applications that use sensitive or confidential information. Other surprising statistics from the study include:

  • Only 20% of information security teams are regularly involved in the decision-making process
  • 25% of information security teams aren’t involved at all
  • Only 30% evaluate cloud computing vendors before deploying their products
  • Only 23% require proof of security compliance
  • A full 75% believe cloud computing migration occurs in a less-than-ideal manner
  • Only 19% provide data security training that discusses cloud applications

Focusing on information governance

IT vendors and suppliers, including the survey sponsor, Symantec, are lining up to help fill the evident gaps in enterprise cloud security tools, standards, best practices and culture adaptation. Symantec is making several recommendations for beefing up cloud security, beginning with ensuring that policies and procedures clearly state the importance of protecting sensitive information stored in the cloud.

“There needs to be a healthy, open governance discussion around data and what should be placed into the cloud,” says Justin Somaini, Chief Information Security Officer at Symantec. “Data classification standards can help with a discussion that’s wrapped around compliance as well as security impacts. Beyond that, it’s how to facilitate business in the cloud securely. This cuts across all business units.” …

David Linthicum claims “Tech firms and advocacy groups come together to seek new regulations -- that could turn out to be disastrous” in his Proposed new cloud privacy rules could backfire post of 4/7/2010 to InfoWorld’s Cloud Computing blog:

Privacy advocacy groups and tech vendors -- the Electronic Frontier Foundation, the ACLU, eBay, Google, and Microsoft -- are urging Congress to revise privacy laws to regulate user information on the cloud. The vendors support the changes because they fear that without regulation and privacy guarantees, people could become uncomfortable with the cloud. While reasonable in concept, the ideas may not work.

The fact of the matter is that the United States has not updated its privacy laws since 1986. With the rapid rise of cloud computing and the fact that more and more sensitive data will be stored off-premise, many believe it's high time to revisit those rules to accommodate today's reality.

But I always get a bit nervous when software specialists, now involved with the cloud, work with the government to create new laws. Here are a few of my issues.

First, regulations have a tendency to stultify innovation as providers make sure they adhere to these new and typically confusing rules. We've seen this issue with the financial reporting guidelines that began to appear earlier this decade, and the proposed cloud privacy laws will initially have similar results.

Second, any regulations that dictate privacy requirements and mechanisms will be outdated pretty much by the time they pass Congress. Other issues will arise, and unless there is a dedicated agency constantly updating the regulation, matters will quickly become dysfunctional -- but please don't create another dedicated agency for this!

Finally, it's a new world order in the cloud. These regulations won't extend to other countries. However, other countries will follow with their own regulations, which will make the situation even more onerous.

So what should be done? The real work needs to be carried out by industry, meaning cloud providers, IT pros, and users -- you and me. We need to come together around detailed requirements regarding privacy and security, and we have to stop writing conceptual white papers. This means setting lines in the sand around how data is encrypted at rest and in flight, what access controls need to be in place, and detailed enabling standards to make all of this work together.

It's pretty simple, unless you get the government involved -- then expenses increase and productivity decreases.

Chris Hoff (@Beaker) awards himself soothsaying points in a Good Interview/Resource Regarding CloudAudit from SearchCloudComputing… post of 4/6/2010:

The guys from SearchCloudComputing gave me a ring and we chatted about CloudAudit. The interview that follows is a distillation of that discussion and goes a long way toward answering many of the common questions surrounding CloudAudit/A6.  You can find the original here.

What are the biggest challenges when auditing cloud-based services, particularly for the solution providers?

Christofer Hoff: One of the biggest issues is their lack of understanding of how the cloud differs from traditional enterprise IT. They’re learning as quickly as their customers are. Once they figure out what to ask and potentially how to ask it, there is the issue surrounding, in many cases, the lack of transparency on the part of the provider to be able to actually provide consistent answers across different cloud providers, given the various delivery and deployment models in the cloud.

How does the cloud change the way a traditional audit would be carried out?

Hoff: For the most part, a good amount of the questions that one would ask specifically surrounding the infrastructure is abstracted and obfuscated. In many cases, a lot of the moving parts, especially as they relate to the potential to be competitive differentiators for that particular provider, are simply a black box into which operationally you’re not really given a lot of visibility or transparency.

If you were to host in a colocation provider, where you would typically take a box, the operating system and the apps on top of it, you’d expect, given who controls what and who administers what, to potentially see a lot more, as well as there to be a lot more standardization of those deployed solutions, given the maturity of that space.

How did CloudAudit come about?

Hoff: I organized CloudAudit. We originally called it A6, which stands for Automated Audit Assertion Assessment and Assurance API. And as it stands now, it’s less in its first iteration about an API, and more specifically just about a common namespace and interface by which you can use simple protocols with good authentication to provide access to a lot of information that essentially can be automated in ways that you can do all sorts of interesting things with.

David Kearns asserts “Yale University and Canadian Privacy Commissioner offer negative -- and misinformed -- views on cloud computing” in his Clouded views on privacy post of 4/2/2010 to NetworkWorld’s Security blog:

Privacy and cloud computing have recently been in the news, with stories coming out of academia (Yale University) and government oversight agencies (Canadian Privacy Commissioner). Both, in my view, got it wrong.

First up, and easiest to deal with, is Yale. George Bush's alma mater recently decided to adopt Google Applications for Education, which would include changing from Horde e-mail to Gmail. (See the Yale Daily News story here.) This IT decision has been roundly denounced by some faculty members, who screamed loud enough to at least postpone the switchover.

Just what were their objections?

"Google stores every piece of data in three centers randomly chosen from the many it operates worldwide in order to guard the company's ability to recover lost information -- but that also makes the data subject to the vagaries of foreign laws and governments," according to one faculty member. I'd imagine, of course, that the faculty and students currently have no idea where their data is stored, though. Hopefully the IT department has at least a disaster-recovery plan, which includes off-site storage of data. …

Dave concludes:

Privacy and security are best arrived at through well-negotiated contracts between informed parties, not through the agenda-wielding of ivory tower proselytizers. Well, usually. But, as we've learned over and over again, it isn't the technology that's the problem -- it's the people and the politics.

Next issue we'll venture 700 km north of Yale to see how Canada's Privacy Commissioner tackles the cloud.

<Return to section navigation list> 

Cloud Computing Events

Christine Jacobs, Communications Officer, Governance Studies for the Brookings Institution announced in an e-mail this morning:

Earlier today at Brookings, the federal government’s chief information officer, Vivek Kundra, spoke about how the government is leveraging cloud computing to deliver results for the American people.

Mr. Kundra also announced that the National Institute of Standards and Technology will host a “Cloud Summit” on May 20, with government agencies and the private sector. The Summit will introduce NIST efforts to lead the definition of the Federal Government’s requirements for cloud computing, key technical research, and United States standards development. Furthermore, Mr. Kundra stated that the government will engage with industry to collaboratively develop standards and solutions for cloud interoperability, data portability, and security. [Emphasis added.]

You can read his full remarks here and his accompanying presentation here.

See Darrell West’s Saving Money Through Cloud Computing research paper from the Brookings Institution’s The Economic Gains of Cloud Computing event in the Windows Azure Infrastructure section.

The preceding two posts are repeated from Windows Azure and Cloud Computing Posts for 4/6/2010+

The Interop 2010 conference to be held 4/25 through 4/29/2010 in Las Vegas, NV will feature an Enterprise Cloud Summit chaired by Alistair Croll on 4/26/2010 from 8:30 AM to 4:30 PM PDT:

In just a few years, cloud computing has gone from a fringe idea for startups to a mainstream tool in every IT toolbox. The Enterprise Cloud Summit will show you how to move from theory to implementation. We'll cover practical cloud computing designs, as well as the standards, infrastructure decisions, and economics you need to understand as you transform your organization's IT. We'll also debunk some common myths about private clouds, security risks, costs, and lock-in.

On-demand computing resources are the most disruptive change in IT of the last decade. Whether you're deciding how to embrace them or want to learn from what others are doing, Enterprise Cloud Summit is the place to do it.

Sessions include:

    1. Practical Cloud Strategies: 8:45 am – 9:15 am
    2. Accelerating Development and Hiding Operations with PaaS: 9:15 am – 10:00 am
    3. What to Move to IaaS: 10:30 am – 11:00 am
    4. Connecting On-Premise and On-Demand with Hybrid Clouds: 11:00 am – 12:00 pm
    5. Cloud Design Patterns: 1:00 pm – 1:30 pm
    6. What End Users Know: Stories from Cloud Users: 1:30 pm – 2:15 pm
    7. What's Next: Cloud Operators Talk Futures: 2:45 pm – 3:30 pm
    8. Public and Private: IT Policy in an On-Demand World: 3:30 pm – 4:15 pm
    9. Practical Steps for Cloud Adoption: 4:15 pm – 4:30 pm

In addition, the following conference sessions are related to cloud computing:

Lori MacVittie’s A Hardware Platform and a Virtual Appliance Walk into a Bar at Interop … essay of 4/7/2010 begins:

Invariably when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that’s not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center, turn a critical eye toward the resources available and how they’re partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined with the goal of making more efficient the network, storage, and application network infrastructure over which those applications are delivered.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective there are few changes required in the underlying infrastructure to support server virtualization because the application and its behavior don’t really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. The way in which scale, fault-tolerance, and availability of the network infrastructure – from storage to the application delivery network – are achieved is impacted by the simple change from physical to virtual. In some cases this impact is a positive one, in others, it’s not so positive. Understanding how to take advantage of virtual network appliances such that core network characteristics such as fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of “the data center network” with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe), the inclusion of externally controlled environments in the organization’s data center strategy will prove to have its challenges.

Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today’s cloud computing offerings).  …

UBM TechWeb presents Top [Media] Coverage Highlights from Cloud Connect with abstracts and links to articles related to the Cloud Connect 2010 conference held at the Santa Clara Convention Center on 3/15 to 3/18/2010. Following are a few recent examples:

IT Spending On Cloud Ratcheting Up
By Charles Babcock
April 5, 2010
InformationWeek
Market research for the venture capital firm, the Sand Hill Group, has concluded that cloud computing represents one of the largest new investment opportunities on the horizon. Rangaswami aired the report's conclusion at last month's Cloud Connect Conference and asked IBM's VP of Cloud Services Ric Telford what he thought: "I have no problem with those numbers (40% in three years; 70% in five) as long as you include the caveat, it could be any one of five delivery models."

Geo Tagged Cloud Zombies
By Oliver Marks
March 28, 2010
ZDNet
Rodney Joffe, Senior Vice President and Senior Technologist at Neustar, Inc. (who offer directory and clearinghouse services to large and small telecommunications service providers), spelled out some amazing realities in his talk 'Cloud Computing for Criminals' at the recent Cloud Connect conference in Santa Clara, California.

Agility, Not Savings, May Be The True Value Of The Cloud
By Robert Mullins
March 19, 2010
Network Computing
There are ways to calculate the Return On Investment (ROI) when moving IT from the data center to the cloud, but experts say the savings to the IT budget is only a fraction of the reason to do so. Analysts and proponents of cloud computing discussed calculating the total cost of ownership (TCO) and the ROI of moving to cloud computing at Cloud Connect, a three-day conference this week in Santa Clara, Calif.

The cloud's three key issues come into focus
By David Linthicum
March 19, 2010
InfoWorld
I'm writing this blog on the way back from Cloud Connect, held this week in Santa Clara. It was a good show, all in all, and there was a who's-who in the world of cloud computing. I've really never seen anything like the hype around cloud computing, possibly because you can pretty much "cloudwash" anything, from disk storage to social networking. Thus, traditional software vendors are scrambling to move to the cloud, at least from a messaging perspective, to remain relevant. If I was going to name a theme of the conference, it would be "Ready or not, we're in the cloud."

Clear Skies Ahead: 10 Demystifying Cloud Computing Products From Cloud Connect
By n/a
March 18, 2010
Channel Web
Cloud Connect was held this week in Santa Clara, Calif. A place for folks to gather and learn more about the cloud and how it can ease business and reduce costs, Cloud Connect kicked off Monday.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr’s Introducing the Amazon Simple Notification Service post of 4/7/2010 describes Amazon Web Services’ new SNS feature and turns up the competitive heat on Windows Azure:

Today I'd like to tell you about our newest service, the Amazon Simple Notification Service.

We want to make it even easier for developers to build highly functional and architecturally complex applications on AWS. It turns out that applications of this type can often benefit from a publish/subscribe messaging paradigm. In such a system, publishers and receivers of messages are decoupled and unaware of each other's existence. The receivers (also known as subscribers) express interest in certain topics. The senders (publishers) can send a message to a topic. The message will then be immediately delivered or pushed to all of the subscribers to the topic.

The Amazon Simple Notification Service (SNS) makes it easy for you to build an application in this way. You'll need to know the following terms in order to understand how SNS works:

Topics are named groups of events or access points, each identifying a specific subject, content, or event type. Each topic has a unique identifier (URI) that identifies the SNS endpoint for publishing and subscribing.

Owners create topics and control all access to the topic. The owner can define the permissions for all of the topics that they own.

Subscribers are clients (applications, end-users, servers, or other devices) that want to receive notifications on specific topics of interest to them.

Publishers send messages to topics. SNS matches the topic with the list of subscribers interested in the topic, and delivers the message to each and every one of them. Here's how it all fits together:
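
As a rough, not-from-the-post illustration of those roles, here is how an owner/publisher might create a topic, register an e-mail subscriber, and publish a message with the AWS SDK for .NET; the exact client and request names can differ between SDK versions, so treat them as assumptions:

```csharp
using System;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

class SnsSketch
{
    static void Main()
    {
        // Credentials and region come from your AWS account configuration.
        var sns = new AmazonSimpleNotificationServiceClient();

        // Owner creates a topic; the response carries the topic's unique ARN.
        string topicArn = sns.CreateTopic(new CreateTopicRequest { Name = "order-events" }).TopicArn;

        // A subscriber registers an endpoint (e-mail here; HTTP, SQS, etc. also work).
        sns.Subscribe(new SubscribeRequest
        {
            TopicArn = topicArn,
            Protocol = "email",
            Endpoint = "ops@example.com"
        });

        // A publisher sends a message; SNS pushes it to every confirmed subscriber.
        sns.Publish(new PublishRequest
        {
            TopicArn = topicArn,
            Subject = "New order",
            Message = "Order 12345 was placed."
        });

        Console.WriteLine("Published to " + topicArn);
    }
}
```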

Jeff continues with a brief description of “what it takes to get started” with SNS.

I’m still waiting for Werner Vogels to chime in on AWS SNS.

<Return to section navigation list> 
