Cyfronet supercomputers' archive

CDC Cyber 72

Decommissioned

The first computing machine was launched at Cyfronet on June 27, 1975. It was one of two CDC CYBER 72 computers purchased by Poland in 1973: the first was installed at the Institute for Nuclear Research in Świerk near Warsaw, and the second at Cyfronet (then known as the CYFRONET-Kraków Academic Computing Center). There was a two-year gap between the two installations, mainly due to embargo-related issues affecting socialist countries, including Poland. Universities and research institutes in both cities were equipped with terminals consisting of a punched card reader and a line printer, allowing remote program execution and result retrieval in printed form.

The supercomputer installed at Cyfronet, manufactured by the American company Control Data Corporation (CDC), was based on the 6400 series architecture. Its computing speed, remarkably high for the time, was achieved through a multiprocessor structure and multi-threaded processing. The central unit of the CYFRONET-Kraków computing system had a single central processing unit (CPU), controlled by ten peripheral processing units (PPUs). All data transfers between the main memory and peripheral devices were handled by the PPUs, which also performed basic arithmetic and logical operations and prepared and assigned tasks to the central processor, allowing it to be used as efficiently as possible. The average processing speed of the central unit was one million operations per second.

Configuration of the Cyber 72 system at ŚCO CYFRONET-Kraków Computing Center

  • 1 central processor with 98,304 words of main memory (60-bit words, 1 μs memory cycle),
  • 10 peripheral processors, each with 4,096 words of memory (12-bit words),
  • 12 input/output (I/O) channels,
  • Dual-screen operator console equipped with a keyboard,
  • Card reader capable of reading 1,200 cards per minute,
  • Line printer printing 1,200 lines per minute (136 characters per line),
  • Card punch (250 cards per minute),
  • Disk storage: initially three type 841 disks (2×40 MB each); in 1979 they were replaced with three type 844 disks (100 MB each).

Convex supercomputers

Decommissioned

In 1991, thanks to favorable changes in the political climate and funding secured from the State Committee for Scientific Research, a mini-supercomputer, the Convex C120 (Lajkonik), was launched at Cyfronet. It was the first vector computer in Central Europe, featuring a vector-scalar processor (20 MFlops), 64 MB of RAM, and 4 GB of disk space. Lajkonik quickly proved insufficient for the growing computational demands of the Kraków scientific community.

Therefore, just a year later, a single-processor Convex C3210 (Krak) was installed, offering a maximum system performance of 50 MFlops, 128 MB of RAM, and 10 GB of disk space. Krak was significantly faster, more modern, and fully compatible with Lajkonik. In 1993, it was upgraded to the C3220 version by adding a second processor and increasing the memory 2.5-fold (to 320 MB).

In 1994, after nearly a year of efforts to obtain an export license from the U.S. government, the Convex C3820 (Smok) computer was installed at Cyfronet. At the time, it was the largest computer in Central Europe, equipped with two vector-scalar processors.

The Smok supercomputer offered a performance of 480 MFlops for 32-bit arithmetic and 240 MFlops for 64-bit arithmetic. It had 256 MB of RAM and 10 GB of disk storage.

As with its predecessors, the system reached full utilization shortly after installation. This created an urgent need for further, rapid expansion of computing resources.

The solution came in 1995 with the launch of Cyfronet’s first massively parallel computer — the Convex Exemplar SPP1000/XA (Anna).

The initial configuration included 16 HP PA-RISC 7100 processors, 1.5 GB of RAM, and 32 GB of disk storage, delivering a maximum performance of 3.2 GFlops. Anna operated under the SPP-UX 3.0.4 operating system, based on HP-UX. After expanding the configuration with an additional 16 processors, the computer reached a theoretical peak performance of 7.68 GFlops.

In June 1996, the Anna supercomputer became the first Cyfronet system to appear on the TOP500 list of the world’s fastest supercomputers, ranking 408th (listed at the time as an SPP1200). It remained on the list in the November 1996 edition, this time listed as an SPP1600.

IBM RS/6000 SP

Decommissioned

In 1996, the IBM RS/6000 SP supercomputer was installed, equipped with 5 Power2 processors, providing a maximum theoretical computing power of 1.3 GFlops.

The computer had 2.5 GB of RAM and 27 GB of disk storage.

The IBM RS/6000 SP operated on the AIX operating system.

SGI Origin 2000/2800

Decommissioned

In 1998, the SGI Origin 2000 (Grizzly) computer began operating at the Center. In its full configuration, later known as the SGI Origin 2800, it featured 128 R14000 processors running at 500 MHz, 64 GB of RAM, and 1.4 TB of disk storage.

The supercomputer operated on the IRIX 6.5 operating system (a UNIX variant developed by Silicon Graphics) and supported network interfaces such as FDDI, ATM, Ethernet, and HIPPI.

The SGI 2800 was built on the modular S2MP (Scalable Shared-memory MultiProcessing) architecture, in which all processors had access to a common shared memory. From the user's perspective, the SGI 2800 functioned as a single computer, where any task could utilize the entire available memory and all processors.
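
To make this single-system-image, shared-memory model concrete, here is a minimal illustrative C/OpenMP sketch of the programming style such machines encouraged (OpenMP was one common option for shared-memory systems of this kind; this is a generic sketch, not taken from Cyfronet documentation). Every thread may run on a different processor, yet all of them read and write the same array in a single shared address space:

    /* Illustrative shared-memory parallel sum.
     * Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp sum.c */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)

    int main(void) {
        double *a = malloc(N * sizeof *a);  /* one shared address space */
        double sum = 0.0;
        long i;

        /* Threads are spread across processors, but all of them see the
         * same memory; the hardware keeps their caches coherent. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++) {
            a[i] = (double)i;
            sum += a[i];
        }

        printf("threads available: %d, sum = %.0f\n",
               omp_get_max_threads(), sum);
        free(a);
        return 0;
    }

On a ccNUMA machine such as the Origin, where a page physically resided still affected performance, but program correctness never depended on data placement.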

SUN supercomputers

Decommissioned

In 2001, the SUN Fire 6800 (Saturn) server was installed. The supercomputer's architecture included 24 UltraSPARC III processors running at 900 MHz and 24 GB of RAM, delivering a maximum theoretical performance of 43.2 GFlops. Saturn was equipped with 138 GB of disk storage. The system's network interfaces included SUN Gigabit Ethernet and Fast Ethernet. It operated on the Solaris 8 operating system.
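
The quoted peak figure follows from the usual formula for theoretical performance, assuming each UltraSPARC III completes two floating-point operations per cycle (one add and one multiply, matching that processor's dual floating-point pipelines):

    R_peak = 24 processors × 0.9 GHz × 2 flops/cycle = 43.2 GFlops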

The supercomputer was part of a grid-portal environment for high-performance computing within a Sun computer cluster, developed under the PROGRESS project.

A year later, the Center acquired a SUN Fire V880 server, featuring 4 UltraSPARC III processors at 750 MHz, 8 GB of RAM, and 216 GB of disk storage. The server was connected to a StorEdge T3 ES disk array, consisting of 18 disks with 36 GB of capacity each.

Zeus (RackSaver PC)

Decommissioned

In 2002, the Zeus cluster (RackSaver PC) was launched. It consisted of 27 dual-processor nodes housed in a single rack (a total of 54 processors):

  • 4 nodes with 2 Pentium III 1 GHz processors, 512 MB of RAM, and 40 GB of disk storage each
  • 23 nodes with 2 Xeon 2.4 GHz processors, 1 GB of RAM, and 40 GB of disk storage each

The RackSaver PC achieved a theoretical computing power of 2.1 TFlops.

The cluster had a total of 25 GB of RAM and 1080 GB of disk storage (27 × 40 GB).

It used a Guardian 4400 disk array (640 GB) and an internal network based on an HP ProCurve switch (40 × 100 Mbps ports, 1 × 1 Gbps port).

The system operated on the Red Hat Linux 6.2 operating system.

The supercomputer was installed as part of the European CrossGrid project.

HP Integrity Superdome

Decommissioned

In 2003, the first HP Integrity Superdome computer in Poland (Jowisz) was installed. It featured 8 Intel Itanium2 processors running at 1.5 GHz, 8 GB of RAM, and 2 TB of disk storage.

Jowisz achieved a computing performance of 48 GFlops.

It operated on the HP-UX 11i operating system.

Baribal - SGI Altix 3700

Decommissioned

From 2006, the SGI Altix 3700 (Baribal) operated at the Center, initially offering users 128 Intel Itanium2 processors (1.5 GHz) with a total performance of 384 GFlops, 256 GB of RAM, and 6.7 TB of local disk storage.

After an upgrade to 256 processors, the supercomputer reached a computing power of 1.5 TFlops.

Baribal was an SMP-type computer, meaning it was seen by users as a single system with unified memory and a single operating system — in this case, SUSE Linux Enterprise Server 10.

It used the NUMAlink 4 interconnect and the PBS Pro queueing system.

The supercomputer was decommissioned at Cyfronet in November 2014.

IBM Blade Center HS21/HS21XM

Decommissioned

From 2007, Cyfronet users had access to the IBM Blade Center HS21/HS21XM supercomputer (Mars).

Its architecture included:

  • 56 quad-core nodes with Intel Xeon 5150 processors running at 2.66 GHz and 8 GB of RAM each,
  • 40 eight-core nodes with Intel Xeon E5345 processors running at 2.33 GHz and 16 GB of RAM each.

In total (56 × 4 + 40 × 8 cores), Mars provided users with 544 computing cores and 1.2 TB of RAM, delivering a computational power of 6 TFlops.

The compute nodes in the cluster were interconnected via a 1 Gbps Ethernet network.

The supercomputer operated on the Red Hat Linux 5 operating system and used the Torque/Maui job scheduling system.

SGI Altix 4700

Decommissioned

In 2007, the SGI Altix 4700 (Panda) supercomputer was launched, featuring 32 Intel Itanium2 processors (1.66 GHz), 64 GB of RAM, and 2.3 TB of disk storage.

The system's theoretical computational power was 212 GFlops.

In terms of architecture, Panda, like Baribal, was an SMP-type supercomputer. It was based on the same operating system — SUSE Linux Enterprise Server 10.

The supercomputer used the NUMAlink 4 interconnect and the PBS Pro job scheduling system.

Panda remained in use at Cyfronet until 2015.

Reconfigurable RASC platform

Decommissioned

An integral component of the Panda supercomputer was the Reconfigurable Application Specific Computing (RASC) module. This technology enabled users to create application-specific hardware using reconfigurable logic elements.

The RASC module contained two Virtex-4 LX200 FPGA chips, each offering 200,000 reconfigurable logic cells. In addition, two 80 MB blocks of QDR RAM were available.

The implementation of RASC in the Altix series meant that the FPGAs were treated as an integral part of the system rather than as peripheral devices. Thanks to high-speed connections to processor resources, the bandwidth bottleneck typical of peripheral FPGA accelerators of that era was avoided.

HP Cluster Platform 3000 BL

Decommissioned

In 2008, the HP Cluster Platform 3000 BL supercomputer (Zeus), equipped with Intel Xeon Quad-Core processors, was installed. It featured 4 TB of RAM, 3 TB of HDD storage, and delivered a performance of 55.5 TFlops. At that time, Zeus was the fastest supercomputer in Poland and in Central and Eastern Europe.

Zeus appeared in the TOP500 ranking twelve times, ten of them as the fastest supercomputer in Poland, including four consecutive editions in which it ranked among the world’s top 100 (peaking at 81st place on the June 2011 list). After an upgrade in 2013, Zeus achieved 374 TFlops in the LINPACK benchmark, placing 114th on the June 2013 TOP500 list. It appeared on the TOP500 for the last time in November 2015, ranking 387th.

According to a 2014 comparison, Zeus’s computing power was equivalent to that of 20,000 standard PCs of the time.

Over the years, Zeus supported numerous large-scale computational experiments, including research conducted at CERN, such as work that contributed to the discovery of the Higgs boson.

Zeus was a heterogeneous supercomputer, consisting of four classes of nodes with varying architectures, tailored to meet the diverse needs of scientific communities.

Zeus Computing Cluster

  • Operating system: Scientific Linux 6
  • Configuration: HP BL2x220c
  • Processors: Intel Xeon
  • RAM: 23 TB
  • Computing power: 169 TFlops

Zeus BigMem Cluster

  • Operating system: Scientific Linux 6
  • Configuration: HP BL685c
  • Processors: AMD Opteron
  • RAM: 26 TB
  • Computing power: 61 TFlops

Zeus GPGPU

  • Operating system: Scientific Linux 6
  • Configuration: HP SL390s
  • Processors: Intel Xeon
  • RAM: 3.6 TB
  • Computing power: 136 TFlops

Zeus FPGA

  • Operating system: Ubuntu
  • Configuration: 2 × M-503 (Virtex-6 LX240T FPGA)
  • Processors: Intel i7
  • RAM: 12 GB
  • Computing resources: 241,152 FPGA logic cells, 768 DSP48 slices, and 14,976 kbits of Block RAM

Zeus vSMP

  • A group of vSMP-type virtual machines designed for memory-intensive computations.
  • The vSMP virtualization software allowed any number of virtual machines to be created, of any size that was a multiple of the available physical servers (see the illustrative sizing example below this list).
  • In 2012, the Zeus vSMP system ranked among the top three most powerful machines of its kind in the world.
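
To make the sizing rule above concrete, consider a purely hypothetical example (not a documented Cyfronet configuration): aggregating 16 physical servers with 256 GB of RAM each into a single vSMP virtual machine would present applications with one system of 16 × 256 GB = 4 TB of memory, while the same servers could instead be partitioned into, for example, four virtual machines of 1 TB each.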