No Apology for High Performance Computing (HPC)

A few months back, at one of my regular monthly CTO club gatherings here in Connecticut, an articulate speaker presented the top three IT trends that are poised to fundamentally transform businesses and society at large:

  • Big Data and Analytics
  • Cloud Computing
  • Mobile Computing

I do agree that these are the top three IT trends for the near future – each at a different stage of adoption, maturity, and growth. But they are not independent trends; in today’s interconnected world, they overlap and reinforce one another.

However, while discussing big data and analytics, the speaker made it a point to exclude HPC, dismissing it as an exotic niche of interest largely to – and, by implication, restricted to – scientists, engineers, and other “non-mainstream” analysts who demand “thousands” of processors for esoteric work in such diverse fields as proteomics and weather/climate prediction. This immediately made me raise my hand and object to such ill-advised pigeonholing of HPC practitioners – architects, designers, software engineers, mathematicians, scientists, and engineers.

I am guilty of being an HPC bigot. I think these practitioners are some of the most pioneering and innovative folks in the global IT community. I indicated to the speaker (and the audience) that because of the pioneering, path-breaking pursuits of the HPC community, which is constantly pushing the envelope in IT, the IT community at large has benefited from innovations that are mainstream today: open source, cluster/grid computing, and indeed the Internet itself. Many of today’s mainstream Internet technologies emanated from CERN and NCSA – both organizations that continue to push the envelope in HPC. Even modern data centers, with their large clusters and farms of x86 and other industry-standard processors, owe their meteoric rise to the tireless efforts of HPC practitioners.

As early adopters, HPC practitioners painstakingly devoted their collective energies to building, deploying, and using the early HPC cluster and parallel systems – servers, storage, networks, the software stack, and applications – constantly improving their reliability and ease of use. Such systems now power most of the world’s businesses and organizations, whether in the cloud or in some secret basement. Big data analytics, cloud computing, and even mobile/social computing (Facebook and Twitter run gigantic data centers) are trends that stand on the shoulders of the HPC community!

By IT standards, the HPC community is relatively small – some 15,000 practitioners attend the annual Supercomputing conference. This year’s event is in Seattle and starts on November 12. But HPC practitioners have broad shoulders, keen and incisive minds, and a passionate demeanor not unlike that of pure mathematicians. Godfrey H. Hardy – the famous 20th-century British mathematician – wrote A Mathematician’s Apology in defense of the arcane and esoteric art and science of pure mathematics. But we HPC practitioners need no such Apology! We refuse to be dismissed as irrelevant to IT and to big IT trends. We are proud to practice our art, science, and engineering. And we have the grit, muscle, and determination to keep riding in front of big IT trends!

I have rambled enough! I have wanted to get this “off my chest” for the last few months. But between my dawn-to-dusk day job of thinking, analyzing, and writing about big IT trends for my clients, and my family and personal commitments, I have had little time until this afternoon. So I decided to blog before getting bogged down with yet another commitment. It’s therapeutic for me to write about the importance and relevance of HPC for mainstream IT. I know I could write a tome on this subject. But lest my tome go with me unwritten to the tomb, an unapologetic blog will do for now.

By the way, Hardy’s Apology – an all-time favorite tome of mine – is not really an apology. It’s a passionate account of what pure mathematicians do and why they do it. We need such a tome for HPC, to educate the broader IT community. But for now this unapologetic blog will do. Enjoy. It’s dusk in Connecticut. The pen must come off the paper. Or should I say, the finger off the keyboard? Adios.


OPEN VIRTUALIZATION ecosystem continues to gather momentum – New KVM Alliance

Today’s enterprise data center crisis is largely caused by the sprawl of under-utilized x86 systems, ever-escalating electricity costs, and rising staffing costs. By using virtualization to centralize and consolidate IT workloads, many organizations have significantly reduced their capital and operational expenses, improved IT infrastructure availability, and achieved better performance and utilization.

Last month, Red Hat, Inc. (NYSE: RHT) and IBM (NYSE: IBM) announced that they are working together to make products and solutions based on KVM (Kernel-based Virtual Machine) technology the OPEN VIRTUALIZATION choice for the enterprise. Several successful deployments, such as the IBM Research Computing Cloud (RC2) and BNP Paribas, were highlighted.

Later that month, BMC Software, Eucalyptus Systems, HP, IBM, Intel, Red Hat, Inc., and SUSE announced the formation of the Open Virtualization Alliance, a consortium intended to accelerate the adoption of open virtualization technologies, including KVM.

The benefits of KVM include outstanding performance on industry-standard benchmarks, excellent security and reliability, powerful memory management, and very broad support for hardware devices, including storage. Further, since KVM is part of the Linux kernel, clients benefit from the numerous advantages of Linux, including lower TCO, greater versatility, and support for the widest range of architectures and hardware devices. Moreover, Linux performs and scales well, is modular and energy-efficient, is easy to manage, and supports an extensive and growing ecosystem of ISV applications.
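To make that openness concrete, here is a minimal sketch – assuming the libvirt Python bindings (libvirt-python) are installed and a local KVM/QEMU hypervisor is running, neither of which the announcements above prescribe – showing how a client can enumerate KVM guests through the open libvirt interface available across Linux distributions:

    # Minimal sketch: list KVM guests via libvirt.
    # Assumes libvirt-python is installed and a local KVM/QEMU host is running.
    import libvirt

    conn = libvirt.open("qemu:///system")    # connect to the local system hypervisor
    try:
        for dom in conn.listAllDomains():    # all defined guests, active or not
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()                         # release the hypervisor connection

The same libvirt interface underpins higher-level open tools such as virsh and virt-manager – exactly the kind of vendor-neutral stack the alliance is promoting.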

While we believe that OPEN VIRTUALIZATION holds great promise for addressing the crisis in today’s data centers and is a key enabling technology for clients contemplating a transition to cloud computing, its success – and that of the alliance members – will depend largely on how this new alliance grows and how well its members can:

  • Build a more complete and robust IT ecosystem that includes Independent Software Vendors (ISVs), Systems Integrators (SIs), and other data center/cloud solution providers
  • Provide a MEASURED MIGRATION path to existing clients who have substantial IT investments in proprietary virtualization technologies
  • Deliver differentiated offerings (systems, complementary software, and services) that best address growing client workloads and the data center crisis, now and in the future.

Further proof points of this alliance’s momentum would be the participation of a major ISV or SI as a key driving member, and/or the adoption of OPEN VIRTUALIZATION in mission-critical environments – at banks or in large-scale government settings that demand bulletproof security and reliability. We think this will happen sooner rather than later as the KVM alliance momentum builds!

In the end, the Open Source movement (VIRTUALIZATION included) has always been about giving clients the flexibility of choice, growth, and customization by avoiding the proprietary traps of vendor lock-in, while still meeting the most stringent enterprise-grade requirements for security, reliability, and quality of service!


MEASURED MIGRATION is Smart for the Datacenter and Clouds

Imagine the solar energy needed to convert the earth’s water mass into clouds! Likewise, with legacy IT investments estimated in the trillions of dollars across an interconnected global IT environment, the sheer effort of migrating even a modest fraction of these environments to the cloud is colossal.

Yet over the past few years, the predominant debate in enterprise IT has been about the rate and pace of the transition to the cloud, starting with the need to make the datacenter smart and agile.

While we believe that cloud computing will dramatically change the way IT services are consumed and delivered in the future, we also believe that this transition must be thoughtful and measured. Companies need a MEASURED MIGRATION trajectory, staged to minimize risk and maximize returns. They must take great care in examining which business and IT processes can be migrated, with an eye to optimizing their business metrics without assuming needless risk.

Numerous surveys suggest that cloud computing will be a more-than-$100-billion opportunity by 2015 and that a large fraction of IT solutions will be delivered over the cloud within the next few years. While we could debate the precise estimates, we believe that:

  • The market opportunity is large, with growth rates much faster than the overall IT industry’s,
  • Private and hybrid clouds will become the dominant cloud delivery models as enterprise workloads begin to leverage the promise of clouds while security concerns about public clouds persist,
  • Before making substantial new cloud investments, businesses will carefully examine the business case, which will be driven primarily by their current and future workload needs, and lastly,
  • Customers will rely on cloud providers who have the deepest insights into their workloads and can deliver a broad portfolio of cloud software, services, and systems optimized to these workloads with a MEASURED MIGRATION strategy.

The winners, we believe, will be those IT solution providers who not only offer promising solutions in such cloud-enabling technologies as virtualization, scalable file systems, and end-to-end systems management, but also have a strategic vision and an execution path that facilitate MEASURED MIGRATION.

IBM’s Systems Software Division, part of the Systems and Technology Group (STG), is one such large solution provider, with an impressive array of over 16 cloud-enabling IaaS technologies ranging from virtualization and systems management to scalable file systems, high availability, and disaster recovery. More importantly, in recent briefings we were impressed by the strategy and vision articulated by the leaders of these IBM units. They consistently emphasized the need for end-to-end solutions and staged engagement methodologies that not only deliver best-in-class technology but also help clients with MEASURED MIGRATION as they modernize their datacenters or embark on the transition to cloud computing.

We heard these senior executives articulate the need for IT environments to be “tuned to task” and “optimized through comprehensive systems management,” with “staged migration to private clouds that are then seamlessly integrated with public clouds to manage spiky workloads.” All of this is critical for MEASURED MIGRATION.

In fact, at a later briefing, we learned that IBM has a Migration Services group that has grown almost tenfold over the past six years or so. This “Migration Factory” is, we believe, a major driver of IBM’s substantial recent revenue growth across STG, especially in the Linux/Unix market.

With thousands of successful migrations and competitive wins, we believe IBM and its ecosystem partners have the resources and the track record to scale MEASURED MIGRATION to the cloud. It’s a strategy that will ultimately – over the next decade or more – move a significant share of the IT investments here on earth to the clouds!
