The incredible shrinking supercomputer ranking
India's tech poster boy of the eighties, C-DAC, can
boast of a glorious past. However, recent advances on the supercomputing
front, together with C-DAC's failure to find a market for its products,
have given rise to a big question: can the organisation keep up with the times?
C-DAC needs to spruce up its act, or India's supercomputing dream may
soon be relegated to the history books, says SRIKANTH RP
IN THE late
eighties, Indians had good reason to be proud. Back then, when the US government
refused to allow India to procure a Cray supercomputer for weather forecasting,
our very own Centre for Development of Advanced Computing (C-DAC) came up with
an indigenous alternative, the Param, based on a massively parallel processing
architecture. That the country could develop a supercomputer twice as powerful
as a serial Cray of the time, and at one-third the cost, made headlines
the world over.
Unfortunately, C-DAC has continued to rely on its iconic status instead of moving
ahead. Analysts tracking the supercomputer market believe that C-DAC's
architecture, though revolutionary in the eighties, is expensive today. This is
why even state-owned institutions now prefer MNC or other vendors for their supercomputing needs.
For instance, the Tata Institute of Fundamental Research (TIFR), Mumbai, decided
to go in for a Cray supercomputer with a sustained speed of 205 Gigaflops at
a cost of around Rs 8 crore (funded by the department of atomic energy). Further,
the Institute of Mathematical Sciences (IMSc), Chennai, set up a cluster using
Linux to develop a supercomputer codenamed Kabru (see box: The making of Kabru)
with a sustained speed of 959 Gigaflops. Ideally, C-DAC should have competed
for, and won, these orders, especially since both institutions are state-owned.
The US government has relaxed quite a few restrictions on
the export of supercomputers to India, with the result that vendors such as
Cray have been aggressively hawking their products in the country. It's
not as if Indo-American relations have suddenly reached a new high. The simple
fact is that with desktop processors becoming faster, and supercomputing features
becoming available in the Linux kernel, cluster-based supercomputers are gaining
ground, something that both the US government and vendors realise.
The Linux cluster set up at IMSc is the country's most powerful academic
supercomputer. In comparison, C-DAC's top-end offering (Param Padma) yields
only 540 Gigaflops. The result? C-DAC does not find a place in the Top500 supercomputer
list any more (source: top500.org).
Inconsistent policy hampers delivery
Analysts are asking whether C-DAC's architecture is
still relevant and cost-effective. We posed this question to the man who is
said to be the father of the supercomputing revolution in India, Dr Vijay Bhatkar.
While Dr Bhatkar is confident that C-DAC's architecture is still relevant
in today's market, he feels that lack of effort on the government's
part is largely responsible for C-DAC's sluggish reputation.
"We proved in the late eighties that by using clusters instead of large
SMP machines for high-performance computing, we can deliver the same performance
at a fraction of the cost," declares Bhatkar. "This vision has, however,
not changed with the times. I would blame the lack of a consistent policy from
the government, rather than C-DAC, for failing to deliver on the promise of
a great future."
What's ironic is that C-DAC's architecture has
been adopted by many supercomputing players to good effect. For the uninitiated,
C-DAC's Param Padma is architecturally a cluster of IBM pSeries quad-CPU
nodes combined with C-DAC's proprietary ParamNet II high-speed interconnect.
Due to tremendous cost advantages, the tendency to use clusters instead of large
SMP machines for high-performance technical computing is increasing. For example,
in the last Top500 supercomputer list, 208 systems were classified as clusters,
up from 149 in the earlier list. This trend is expected to accelerate, showing
that clustering is here to stay.
So what's wrong with C-DAC's architecture?
Though C-DAC has developed a supercomputer capable of a peak computing power
of one Teraflop, the sustained performance, which analysts say is the real benchmark,
is only around 540 Gigaflops.
Amal D'Silva of Summation Enterprises, whose company was involved in putting
together Kabru, the fastest academic supercomputing cluster in India, has an
explanation. "Choosing expensive (RISC/Unix) nodes instead of the lower-cost
dual-CPU Intel/AMD nodes, along with low volumes for ParamNet II, probably prevents
the current implementation from being cost-effective." But he is quick
to add that without C-DAC, Indians would not have been exposed to high-performance computing.
Says Prof N D Hari Dass of IMSc, "From what I've seen of Param Padma,
coupled with our own experience in building a Teraflop-class supercomputer,
I'd say that the most serious hurdle to Param Padma's acceptance is
the cost. The real strength of
C-DAC's architecture is the ParamNet switch. It should complement this
by going in for cheaper compute nodes. This is how we significantly brought
down our costs. Once the cost becomes favourable, there would be greater demand
for C-DAC's products."
Analysts agree that C-DAC has to look at improving the efficiency of its cluster
architecture. This efficiency can be calculated by looking at the ratio of the
sustained to peak performance of a machine. Going by those figures, Param Padma's
efficiency is around 54 percent (540 Gigaflops sustained against a one-Teraflop
peak), while Kabru's efficiency is a much higher 70 percent (959 Gigaflops
sustained against a 1,382-Gigaflop peak).
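These efficiency figures follow directly from the sustained and peak numbers quoted in the article; a minimal sketch of the arithmetic (the GFlops values are the article's, not fresh benchmark results):

```python
# Cluster efficiency is the ratio of sustained performance (what Linpack
# actually measures) to theoretical peak performance. Figures in GFlops,
# as quoted in the article.
def efficiency(sustained_gflops: float, peak_gflops: float) -> float:
    """Return sustained-to-peak efficiency as a percentage."""
    return 100.0 * sustained_gflops / peak_gflops

param_padma = efficiency(540, 1000)   # Param Padma: 540 GFlops of a 1-Teraflop peak
kabru = efficiency(959, 1382)         # Kabru: 959 GFlops of a 1,382-GFlop peak

print(f"Param Padma: {param_padma:.0f}%")  # prints 54%
print(f"Kabru: {kabru:.0f}%")              # prints 69%, i.e. roughly 70 percent
```

Note that 540 Gigaflops out of a one-Teraflop peak works out to 54 percent, while Kabru's 959 out of 1,382 rounds to the 70 percent the analysts cite.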
The finest talent isn't enough
Though C-DAC possesses the best technical talent in the country, it has been
slow to react to the marketplace since every decision needs government approval.
And while the organisation has moved into areas such as local language computing
and e-governance, the focus on supercomputers seems to be fading.
So is the organisation on the right track? While R Ramakrishnan, who recently
took over as executive director of C-DAC, admits there's not enough demand
for supercomputers in India, he points out that it is unfair to compare a technical
R&D organisation with a product company. "This has always been the
pioneer's problem. We pioneered the supercomputing and local-language software
revolution in India. Other companies followed our steps and became success stories.
Our main focus, then and now, is to be a pioneer in developing new technologies
that will benefit Indian society. However, we recognise the changing times,
and you will see a lot of action on the supercomputing front soon."
Another problem, as Dr Bhatkar points out, is the lack of consistent government
policy and support. Countries such as the US and Japan (which incidentally boasts
of the fastest supercomputer in the world today) do not hesitate to fund supercomputing
projects extensively since they are considered to be strategic to their future.
In contrast, the Indian government has been following a flip-flop policy with
most of our R&D institutions not knowing what the user organisations need.
Dr Bhatkar says that unless user organisations and R&D institutions collaborate,
this disconnect will remain.
The lack of a clear government policy could also be seen when the American government
relaxed regulations in 1992-93 for the export of supercomputers. Questions were
then asked whether India should continue to invest in producing its own supercomputers.
After considerable indecision, the government gave the go-ahead to C-DAC to
continue its focus on its next mission, Param 10000. The importance of investing
in supercomputing research was proved when India conducted nuclear tests in
1998, and the US reacted haughtily by imposing sanctions. Now, after TIFR has
installed a Cray, the same questions are cropping up.
G Nagarjuna, who is associated with the Free Software Foundation, has a valid
point: "A country striving to achieve self-reliance in supercomputing should
invest in making supercomputers within the country. Cost should never be a
hindrance to achieving independence and self-reliance. We should attain these
virtues at any cost." This sentiment is echoed by Dr Bhatkar, who says
that the Indian government has to put its weight behind its people.
Amal D'Silva of Summation feels the lifting of any embargo by the US should
not make a difference. "While the US has offered to remove the few remaining
organisations from the 'denied parties' list, this decision, just
like the one made to impose it, is totally arbitrary and can be reversed anytime,
depending on political expediency. This can hit purchasing institutions hard
as they will have to go through the entire process once again. A worse fate
awaits those users who have sanctions imposed after they have received supercomputers.
The vendors and their Indian representatives are not willing to deliver any
kind of support, even within the warranty period. This leaves the user with
expensive and useless hardware."
While C-DAC cannot change government policy, it can try to create a scenario
where its supercomputers make good economic sense. It has already started an
initiative to build a nationwide grid of supercomputers. The grid would allow
academic institutions to tap the processing power of supercomputers instead
of purchasing one. Sources at C-DAC also say that the institution is trying
to adopt lower-cost Intel/AMD nodes instead of the more expensive RISC/Unix nodes.
Says Nagarjuna, "It was reported recently that the department
of atomic energy made a grant of Rs 3.5 crore to IMSc. The Linux cluster, which
clocked a peak speed of 1.382 Teraflops, was realised at a cost of Rs 2.5 crore, a
fraction of what supercomputers of this pedigree would cost. As you can see,
there is enough expertise within the country." Consider this: Kabru
uses only half the processors to achieve 85 percent of the performance that
India's fastest supercomputer manages with double the number of processors.
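Taken at face value, that ratio implies a per-processor advantage for Kabru; a back-of-the-envelope sketch (the 85 percent and two-to-one processor-count figures are the article's claims, everything else is normalised arithmetic):

```python
# The article's claim: Kabru reaches 85% of the rival machine's performance
# while using half as many processors.
perf_ratio = 0.85   # Kabru's performance relative to the faster machine
proc_ratio = 0.5    # Kabru's processor count relative to the faster machine

# Per-processor throughput advantage:
# (Kabru performance / Kabru processors) vs (rival performance / rival processors)
per_proc_advantage = perf_ratio / proc_ratio
print(f"Per-processor throughput advantage: {per_proc_advantage:.1f}x")  # prints 1.7x
```

In other words, if the article's figures hold, each Kabru processor delivers roughly 1.7 times the sustained throughput of its counterpart in the larger machine.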
Nagarjuna believes that Indian R&D institutions should join the kernel and
compiler communities to develop and improve the free libraries required for
supercomputing. If such options are considered, perhaps several Params can be built.
While C-DAC is a wonderful R&D institution that may have no parallel in
India today, the organisation needs active support from the government to focus
and channelise its efforts the way it had done during the creation of the Param
8000. Perhaps the government could use a supercomputer to clear the cobwebs
surrounding this great institution.
[Table: comparison of supercomputing architectures. SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data) systems are listed as "Ready to use" with a "Single system image", while clusters "Can be complex" and offer a "Single system environment". Source: Summation Enterprises]
[Table: leading systems from the June 2004 Top500 list. Columns: site; computer / number of processors, manufacturer; sustained value (GFlops); peak value (GFlops). The GFlops figures are not recoverable; the systems listed were:]
- Earth Simulator Center: Earth Simulator / 5120, NEC
- Lawrence Livermore National Laboratory: Thunder, Intel Itanium2 Tiger4 1.4GHz, Quadrics / 4096, California Digital Corporation
- Los Alamos National Laboratory: ASCI Q, AlphaServer SC45 1.25 GHz / 8192, HP
- BlueGene/L DD1 Prototype (0.5GHz PowerPC 440 w/Custom) / 8192, IBM/LLNL
- Tungsten, PowerEdge 1750, P4 Xeon 3.06 GHz, Myrinet / 2500, Dell
- xSeries Cluster Xeon 2.4 GHz, Gig-E / 574, IBM
- xSeries Xeon 3.06 GHz, Gig-E / 256, IBM
- KABRU, Pentium Xeon Cluster 2.4 GHz, SCI 3D / 288
- BladeCenter Xeon 3.06 GHz, Gig-Ethernet / 252, IBM
- SuperDome 875 MHz/HyperPlex / 384, HP
- Tech Pacific Exports: Integrity Superdome, 1.5 GHz, HPlex / 128, HP
Source: top500.org (June 2004)
|KABRU is the name of a tall Himalayan peak that remains unconquered by
human beings. Though tall, it is not the highest peak in the Himalayas.
The idea for Kabru was born when Prof Hari Dass of IMSc, Chennai, started
looking for a supercomputer to handle his theoretical physics research,
primarily for large-scale simulations in the area of the Lattice Gauge theory.
Because many of the professor's problems were communications-intensive,
the choice of interconnect was a major factor, as it determines the speed
of the cluster.
Dass started looking at various options for the
interconnect such as Gigabit Ethernet and channel bonding. After considering
various options, he selected Wulfkit, a high-speed interconnect solution
from a Norway-based firm. A pilot gave impressive results: internode bandwidth
was 260 MB per second, compared to 80 MB per second for Gigabit Ethernet,
and internode latency was under five microseconds, compared to approximately
120 microseconds on Gigabit Ethernet, almost 25 times lower.
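The gap between the two interconnects can be put in ratio terms from the pilot figures above; a small sketch (the numbers are the article's reported pilot results, not independent measurements):

```python
# Pilot figures quoted in the article for the Wulfkit interconnect
# versus plain Gigabit Ethernet.
wulfkit_bw, gige_bw = 260.0, 80.0      # internode bandwidth, MB per second
wulfkit_lat, gige_lat = 5.0, 120.0     # internode latency, microseconds

bandwidth_gain = wulfkit_bw / gige_bw  # how many times more bandwidth
latency_gain = gige_lat / wulfkit_lat  # how many times lower latency

print(f"Bandwidth: {bandwidth_gain:.2f}x higher")  # prints 3.25x higher
print(f"Latency: {latency_gain:.0f}x lower")       # prints 24x lower
```

The 24x figure matches the article's "almost 25 times lower", and since the 5-microsecond latency was an upper bound ("under five microseconds"), the true ratio could be higher still.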
Satisfied with the results, IMSc decided to go
in for Wulfkit as the main cluster interconnect technology. But one of
the key challenges in developing a cluster is controlling the amount of
heat generated due to dense packing. A densely-packed cluster such as
Kabru (144 dual Xeon nodes in six 42u racks) needs considerable cooling,
and careful attention needs to be paid to air-conditioning and power supply.
To ensure that the air-conditioning and UPS vendors were able to provide
the infrastructure, the cluster was set up in two phases, 80 nodes in
the first phase followed by 64 in the second. The second phase was
completed in time to submit High Performance Linpack (HPL)
benchmark results to the Top500 list, which ranks supercomputers worldwide.
The results that followed stunned many across the world.
The final results on the HPL benchmark were 959 Gigaflops sustained,
with a peak performance of 1,382 Gigaflops, making Kabru the country's
most powerful academic supercomputer.
It is also proof that for most high-end supercomputing problems, expensive
and proprietary systems are unnecessary.
ANAND Babu, who works for California Digital in
the US (the company is owned by an Indian), has built a supercomputer
that is the second-fastest in the world. It has 4,096 Itanium2 64-bit
processors, 8 TB of RAM and a Quadrics interconnect, and runs GNU/Linux. Codenamed
Thunder, the machine delivers more than 20 trillion floating-point operations
per second, and commands second place in the current Top500 list of the
world's most powerful supercomputers. All the software Babu has released
is under the GNU GPL.