Amazon's most powerful cloud server yet, the Cluster Compute Instance, became available Tuesday in EC2. It is designed for high-performance computing and can be grouped with other Cluster servers over high-speed networking.
Cluster Compute Instances are being launched as "an open beta," said Peter DeSantis, Amazon Web Services general manager, in an interview.
DeSantis said Cluster Instances will be interconnected with 10 Gigabit Ethernet; nodes in a cluster will be able to communicate at ten times the speed of standard EC2 instances.
In addition, DeSantis said the Cluster Instances are racked together to maximize physical proximity and minimize the distance of any communications between nodes. In the past, Elastic Compute Cloud users have had no control over where two servers they activated would be located; now they can direct that Cluster Instances be launched into "a placement group" that ensures physical proximity, said DeSantis.
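To illustrate the placement-group mechanism, the sketch below builds the request parameters a modern AWS SDK (boto3) would take to create a cluster placement group and launch instances into it. The group name and AMI ID are illustrative placeholders, not values from the article; the `cc1.4xlarge` instance type name comes from AWS documentation of the era rather than from this piece.

```python
# Hypothetical parameters for launching Cluster Instances into a placement
# group, shaped like modern boto3 EC2 calls. Names marked below are
# placeholders for illustration only.
create_group_params = {
    "GroupName": "hpc-cluster-demo",  # hypothetical group name
    "Strategy": "cluster",            # pack instances physically close together
}

run_instances_params = {
    "ImageId": "ami-12345678",        # placeholder AMI ID
    "InstanceType": "cc1.4xlarge",    # Cluster Compute type per AWS docs of the era
    "MinCount": 8,
    "MaxCount": 8,                    # the largest self-provisioned group: eight
    "Placement": {"GroupName": create_group_params["GroupName"]},
}

# With AWS credentials configured, the actual calls would be roughly:
#   ec2 = boto3.client("ec2")
#   ec2.create_placement_group(**create_group_params)
#   ec2.run_instances(**run_instances_params)
print(run_instances_params["Placement"]["GroupName"])
```

The point of the `Placement` field is the one DeSantis describes: every instance launched with the same group name is guaranteed physical proximity, so inter-node traffic stays on the cluster's 10 Gigabit fabric.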
DeSantis said EC2 is now used for workloads ranging from genomic sequence analysis to financial modeling and automotive design. The Cluster Compute Instances are designed to support high performance computing tasks, including parallel processing workloads. "These customers have told us that many of their largest, most complex workloads required additional network performance," he said.
The Cluster server will be Amazon's most expensive. It will be priced at $1.60 per hour, compared with $0.085 for a Small Linux server, $0.34 for a Large Linux server, and $0.68 for an Extra Large Linux server.
At the same time, substantially more resources are devoted to the Cluster Instance, which will also run Linux. The CPU of a Small Linux server consists of a single EC2 compute unit, defined as the capacity of a 1 GHz 2007-era Xeon or Opteron processor. The CPU of a Cluster Instance is the equivalent of 33.5 such units. The memory of a Small EC2 server is limited to 1.7 GB; a Cluster Instance has 23 GB. A Small server has 160 GB of local storage versus 1,690 GB for a Cluster Instance.
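The pricing and compute-unit figures above imply that the Cluster Instance, despite being the most expensive server, is the cheapest per unit of compute. A short calculation using only the numbers from the article:

```python
# Price per EC2 compute unit (ECU), from the figures in the article:
# Small = $0.085/hour for 1 ECU; Cluster = $1.60/hour for 33.5 ECUs.
small_price, small_ecus = 0.085, 1.0
cluster_price, cluster_ecus = 1.60, 33.5

small_per_ecu = small_price / small_ecus        # $0.085 per ECU-hour
cluster_per_ecu = cluster_price / cluster_ecus  # roughly $0.048 per ECU-hour

# An eight-instance cluster, the largest a user can self-provision:
cluster_of_eight = 8 * cluster_price            # $12.80 per hour

print(f"${cluster_per_ecu:.3f} per ECU-hour, ${cluster_of_eight:.2f} for 8 nodes")
```

On these figures the Cluster Instance delivers a compute unit for a bit under half the Small server's per-unit rate, with the premium buying the 10 Gigabit interconnect and placement guarantees.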
DeSantis said Amazon Web Services wanted to start out with Cluster Instances running Linux "to guarantee performance." At a future date it will add other operating systems; in the past, the number two offering has been Windows Server. Up to eight Cluster Instances may be grouped together, each running on a two-way server with current generation, four-core Intel Nehalem CPUs, for a total of 64 cores in the cluster. "Any EC2 customer can make use of Cluster instances," DeSantis noted; in other words, high performance computing is available to anyone who self-provisions a cluster and pays the fees. Amazon spokesmen said larger clusters may be built using Cluster Instances, but the largest that can be automatically self-provisioned by an end user is eight instances.
DeSantis said the Cluster Instance has been optimized to efficiently use AWS' Elastic Block Storage for storing the results of a computational run. The Cluster Instances work with all other standard AWS services as well, such as S3 long term storage and the SimpleDB database service.
Amazon's announcement cited the Lawrence Berkeley National Laboratory as a primary facility supporting scientific research sponsored by the U.S. Department of Energy. Keith Jackson, a computer scientist at the lab, said he and other researchers had collaborated with Amazon Web Services in test driving the Cluster instances. "In our series of benchmark tests, we found our HPC applications ran 8.5 times faster on Cluster Compute Instances than the previous EC2 instance types," he said.
The announcement also quoted computer science professor David Patterson of the University of California at Berkeley as saying the Cluster Instance "fills an important need among scientific computing professionals" and makes EC2 "more viable for technical computing." Patterson is a co-inventor of RAID storage and RISC computing.