2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs. See the DAS-5 special nodes page for an overview of all DAS-5 special nodes.

9 Nov 2016

CUDA 8 is now available on all DAS-5 sites with GPU nodes. Check the DAS-5 GPU page for usage info.

May, 2016

IEEE Computer publishes paper about 20 years of Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

6 Aug 2015

DAS-5/UvA has been extended with 4 GTX TitanX GPUs: two each in node205 and node206.

6 Jul 2015

DAS-5 is fully operational!

DAS Achievements

May, 2016: IEEE Computer publishes paper about 20 years of Distributed ASCI Supercomputer

The May 2016 issue of IEEE Computer Society’s flagship journal Computer contains an extensive research paper describing the history and impact of the Distributed ASCI Supercomputer (DAS). The paper is based on Henri Bal’s keynote talk upon receiving the Euro-Par Achievement Award and is written by the DAS steering committee. It explains what makes DAS unique in the world: because of its moderate scale (200 nodes spread over 6 clusters), we were able to build five successive generations, each consistent with the research agenda of its time. By favouring coherence over scale, DAS enabled ground-breaking Computer Science research for almost two decades, resulting in numerous awards and over 100 Ph.D. theses. The central organization through the ASCI research school was crucial to this success. These modest long-term investments in Computer Science infrastructure, from DAS-1 (built in 1997) to DAS-5 (2015), have thus proven to be extremely effective.

If you have used DAS for your research, please refer to this paper:

Henri Bal, Dick Epema, Cees de Laat, Rob van Nieuwpoort, John Romein, Frank Seinstra, Cees Snoek, and Harry Wijshoff:
"A Medium-Scale Distributed System for Computer Science Research: Infrastructure for the Long Term",
IEEE Computer, Vol. 49, No. 5, pp. 54-63, May 2016.
A draft version of this paper is here.

25 August, 2014: Henri Bal receives Euro-Par Achievement Award

Prof. Henri Bal of VU University, Amsterdam, has been announced winner of the 2014 Euro-Par Achievement Award. Recipients of this award are researchers with outstanding merit in parallel computing. Henri Bal received the award primarily for his research on the DAS systems over the past 18 years.

29 May, 2014: TU Delft team wins SCALE Challenge CCGrid 2014

A TU Delft team consisting of Bogdan Ghit, Mihai Capota, Tim Hegeman, Jan Hidders, Dick Epema, and Alexandru Iosup won the SCALE Challenge at CCGrid 2014 with their submission about BTworld, on analyzing large-scale BitTorrent monitoring data.

The contribution of the TU Delft team was entitled "V for Vicissitude: The Challenge of Scaling Complex Big Data Workflows".

23 March, 2014: Jianbin Fang (TU Delft) wins Best Industrial Paper Award

Jianbin Fang, Henk Sips, Lilun Zhang, Chuanfu Xu, Yonggang Che, and Ana Lucia Varbanescu received the "Best Industrial Paper Award" for their paper "Test-Driving Intel Xeon Phi" at the 5th ACM/SPEC International Conference on Performance Engineering, ICPE 2014.

27 February, 2014: DAS-5 gets funding!

The Dutch Science organization NWO has announced that the DAS-5 proposal submitted by the DAS steering committee, headed by Prof. Henri Bal of VU University, will receive funding to build a next generation infrastructure. This will be the fifth time that a single project has received funding from the very competitive NWO program, which is quite an achievement.

We will now work hard on starting the procurement procedure, which is expected to be completed later this year. Stay tuned for announcements about DAS-5 availability!

12 November, 2012: Bogdan Ghit of TU Delft gets Best Paper award

At the 5th Workshop on Many-Task Computing on Grids and Supercomputers (co-located with SuperComputing '12 in Utah) the paper by Bogdan Ghit, Nezih Yigitbasi, and Dick Epema of TU Delft was given the best paper award. This research on "Resource Management for Dynamic MapReduce Clusters in Multicluster Systems" was carried out on DAS-4.

August 2012: Ana Lucia Varbanescu receives a Veni Grant for Large Scale Graph processing

There is a lot of research on large scale graph processing, which is an instrumental part of the BigData challenges the scientific community is currently facing and trying to solve. In this context, more efficient and more productive techniques for data processing are crucial.

In her VENI project, entitled "Graphitti: A Massively Parallel Framework for High-Performance Graph Analytics", Ana Lucia proposes to move away from the typical "the larger, the better" solution for large-scale graph analysis, which essentially consists of adding more and more machines to an existing infrastructure to cope directly with larger data sets and more complex computations. Instead, she focuses on investigating multiple layers of parallelism, from the finest to the coarsest, allowing better utilization of computing nodes and providing a systematic approach for quantifying the actual resource needs of given algorithms and data sets.

This approach is based on the observation that current large-scale architectures are heterogeneous and will become increasingly so, and that this heterogeneity can only be exploited by using these multiple layers of parallelism. Instrumental in this observation is the DAS-4 architecture: by including a variety of processors, accelerators, and combinations thereof, the system is representative of present and future graph analysis architectures, thus providing an excellent tool for design, development, and experimentation with this new approach to graph processing.

In fact, initial results with three graph analysis algorithms - breadth-first search, all-pairs shortest paths, and betweenness centrality - all run on various node configurations on DAS-4, have shown that using heterogeneous nodes can improve per-node performance by up to an order of magnitude, while combinations of nodes can tackle both the data set size and the computation performance, providing speed-ups of 3 to 10 times compared with a similar homogeneous infrastructure (i.e., treating all DAS-4 nodes as equal and using only inter-node parallelism).

These results have been essential in drafting the proposal and convincing the referees and grant committee that the time has come to take fine-grained parallelism in graph processing into account. DAS-4 will further be used for experiments in the Graphitti project.

17 July, 2012: Cees Snoek receives Netherlands Prize for ICT Research 2012

The Netherlands Prize for ICT research worth € 50,000 has been awarded this year to Dr Cees Snoek (University of Amsterdam). Computer scientist Snoek leads a research team that is working on the development of a smart search engine for digital video. He has already hit national and international headlines with his MediaMill Semantic Video Search Engine.

The jury was impressed by the quality and scope of Cees Snoek's work, and therefore considers him to belong to the absolute top of his generation. 'He is also capable of conveying his ideas convincingly. He is an enthusiastic and inspiring lecturer and knows how to disseminate his research results in comprehensible language to a wider public,' says the jury. Snoek researches ways of translating pixels into words: unannotated video images are labelled with recognised persons, objects, scenes, and the interactions between them. Over the past ten years he has made considerable progress in this area. Large-scale multimedia processing is one of the primary application domains successfully explored on DAS-4.

July-Dec 2012: Deltares uses DAS-4 to explore various cloud concepts for modeling and managing Dutch waterworks

Deltares works on smart solutions, innovations, and applications for people, environment, and society throughout the world. One of its products is Delft3D, a world-leading 3D modeling suite used to investigate hydrodynamics, sediment transport, morphology, and water quality in fluvial, estuarine, and coastal environments. Every 5 years the Dutch water environment needs to be inspected, which leads to a high workload as the inspection deadline approaches. Furthermore, Deltares is setting up a new Portal service for its customers, offering the calculation of complex models on the Deltares systems.

Model calculations are generated by the Delft3D FLOW2 module, a multi-dimensional hydrodynamic simulation program that calculates non-steady flow and transport phenomena. The complex workloads generated by this module are currently executed on their H7 Computer Cluster.

Deltares would like to expand its computational resources using a mix of private and public cloud services: a hybrid cloud consisting of its own computer cluster, combined with temporary use of the DAS-4 cluster network, the Amazon EC2 service, and possibly more.

In the joint TUD-Deltares project, the DAS-4 has been instrumental in the design, implementation, and testing of a new scheduling framework that can efficiently schedule complex workloads with deadlines in hybrid clouds.

March 6, 2012: VU selected as CUDA Teaching Center

VU University Amsterdam has been selected as a CUDA Teaching Center by NVIDIA, the company renowned for its GPU-based high performance computing equipment. The award follows the submission of a proposal by Dr. Rob van Nieuwpoort, assistant professor at VU University, department of Computer Science.

VU University is the first in the Netherlands to receive this honor. The award is a recognition for its ongoing commitment to advance the state of the art in parallel computing education using CUDA. CUDA is NVIDIA's parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of graphics processing units (GPUs). CUDA Teaching Centers are recognized institutions that have integrated GPU computing techniques into their mainstream computer science curriculum. As part of the CUDA Teaching Center program, NVIDIA has donated GPUs to support the University’s teaching efforts. The equipment will be integrated into the existing DAS-4 infrastructure.

Dr. van Nieuwpoort teaches GPU computing in the Computer Science master program, and has organized GPU-related training events, such as in-depth courses for PhD students and summer-schools for bachelor students. With this selection, VU University becomes part of a large and strong community of over 480 leading institutions around the world dedicated to GPU computing education and research. Dr. van Nieuwpoort's current research activities include the use of GPUs for radio astronomy, increasing the capabilities of the largest radio telescope in the world, LOFAR, and future instruments such as the SKA. This research is done in collaboration with ASTRON, the Netherlands institute for radio astronomy.

2010-2012: DAS helps conduct record-setting research

Used by researchers of the Parallel and Distributed Systems group at TU Delft, the DAS system has been involved over the past three years in a number of experimental approaches that have set new standards for large-scale experimentation in computer science:

  • The largest BitTorrent observation, from 2010 and ongoing. Experiment size: thousands of BitTorrent trackers, representing hundreds of millions of files and users, observed for several years; over 20 TB of uncompressed data.
  • The largest observation of the population of a Massively Multiplayer Online Game, in 2010. Experiment size: 5 million players of RuneScape, the game holding the unofficial world record in number of open accounts.
  • The largest collection of Grid, P2P, and Gaming datasets. Size: tens of datasets for each domain, covering various distributed systems, hundreds of millions of users, and tens of operational years.

For all the above domains, DAS is being used to process the collected data.

24 January, 2012: DAS-4/ASTRON claims correlation world record

DAS-4/ASTRON has successfully correlated three hours of data coming from 288 LOFAR antennas, possibly setting a world record. AARTFAAC (Amsterdam-ASTRON Radio Transients Facility and Analysis Centre) is a project that will use 288 individual LOFAR LBA dipoles to detect transient phenomena in the radio sky.

At ASTRON, John Romein and his team captured over 3 hours of data from 288 antennas (576 dipoles), which resulted in 27 TB of data. However, the Blue Gene/P cannot handle more than 64 station inputs, so the data was moved to the DAS-4 computer cluster in Dwingeloo using a 10 Gb/s Ethernet connection, and with a number of quick "hacks", they were able to run the LOFAR correlator on a PC cluster. Using 21 machines, correlating the data took less than 9 hours. At 64 channels per subband, this resulted in 619 billion correlations. John Romein thinks that correlating 288 dual-polarized antennas may have broken the "correlation world record".
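
The core operation of a correlator is simple to state: for every pair of antennas and every frequency channel, multiply one signal by the complex conjugate of the other and integrate over time. A scaled-down NumPy sketch of this operation (all sizes below are made-up toy values, not the AARTFAAC configuration, and the real LOFAR correlator is a far more elaborate production code):

```python
import numpy as np

# Toy parameters (illustrative only; AARTFAAC uses 288 antennas).
n_ant = 4        # number of antennas
n_chan = 64      # frequency channels per subband
n_time = 128     # time samples to integrate over

rng = np.random.default_rng(42)
# Simulated channelized voltages: complex samples per antenna/channel/time.
volt = rng.standard_normal((n_ant, n_chan, n_time)) \
     + 1j * rng.standard_normal((n_ant, n_chan, n_time))

# Cross-correlate every antenna pair and integrate over time:
#   vis[i, j, c] = sum_t volt[i, c, t] * conj(volt[j, c, t])
vis = np.einsum('ict,jct->ijc', volt, volt.conj())

# Number of unique baselines (antenna pairs, including autocorrelations).
n_baselines = n_ant * (n_ant + 1) // 2
print(vis.shape, n_baselines)
```

Since the number of baselines grows quadratically with the number of antennas, the work per integration grows quadratically as well, which is what makes correlating 288 inputs so much more demanding than the 64 the Blue Gene/P could handle.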

Dec 2011: DAS-4 instrumental in acquiring EIT ICT.Labs grant EUROPA

Data-intensive computing can benefit from cloud computing. The goal of the EIT ICT.Labs EUROPA activity is to create a cloud-based data-intensive computing infrastructure, including elastic storage and indexing, with massively parallel and distributed, cloud-aware query processing and programming models. The activity focuses on deriving the requirements for this infrastructure and applying it to challenging use cases in the emerging, commercially relevant areas of big data analytics, graph data management, and massively multiplayer gaming.

The DAS has played an important role in acquiring this grant, by offering a trustworthy experimental platform for a variety of use cases and scenarios.

Over the course of this project, the DAS plays a key role in realistic, real-world experimentation with distributed data processing. In particular, DAS-4 has been used during 2012 to assess the performance of a variety of large-scale data processing platforms, including Hadoop, Hadoop Next Gen (YARN), Stratosphere, and Giraph.

7 December, 2011: Frank Seinstra wins EYR3 'Sustainability Prize'

On December 7, 2011, VU computer science researcher Frank Seinstra was awarded the 'Sustainability Prize' in the Enlighten Your Research 3 challenge, organized by SURFnet, SARA, BigGrid, and NWO. The prize was awarded for the most energy-efficient approach to solving large-scale scientific problems, as presented in his proposal "High-Performance Distributed Multi-Model/Multi-Kernel Simulations".

According to the jury report, "The proposal distinguishes itself from the other proposals because the lightpath is a component of the simulation instrument itself", and "The proposal deserves the sustainability prize because of the way it utilizes smart software that makes efficient use of the architecture and the resources".

The proposal constitutes a close collaboration between computer science researchers from VU University Amsterdam, astronomers from Leiden University (LU), and climate researchers from Utrecht University (UU). The co-applicants and collaborators are: (from VU) Frank Seinstra, Jason Maassen, Niels Drost, Maarten van Meersbergen, Ben van Werkhoven, and Prof. Henri Bal, (from LU) Inti Pelupessy and Prof. Simon Portegies Zwart, and (from UU) Michael Kliphuis and Prof. Henk Dijkstra.

Apart from a 'sustainability trophy' and 15,000 Euro in prize money, Frank Seinstra and his team will, in addition to their use of DAS-4, be granted free access to a diverse hardware infrastructure provided by SARA, SURFnet, and BigGrid for a period of two years to facilitate the research.

15 November, 2011: DAS-4/VU on Graph 500 list

At the yearly SuperComputing conference, the DAS-4 cluster of VU University was announced to rank 15th on the worldwide Graph500 list. This is a very good result, as all higher-ranked systems are significantly bigger.

The well-known TOP500 list ranks supercomputers worldwide by their capacity to run extremely large traditional (linear algebra based) computations. In contrast, the new Graph500 benchmark ranks supercomputers according to their ability to deal with the huge unstructured graphs that are becoming more and more common in both scientific and commercial "Big Data" analysis.

DAS-4 is thus confirmed to be a very good platform for these novel application areas.
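
The timed kernel at the heart of Graph500 is a breadth-first search over a large synthetic graph, producing a BFS tree in the form of a parent array. A minimal single-machine sketch of that kernel (illustrative only; the real benchmark runs distributed BFS over graphs with billions of edges):

```python
from collections import deque

def bfs_parents(adj, source):
    """Breadth-first search returning a parent map: the BFS-tree
    output that Graph500 validates and times (a sketch, not the
    reference implementation)."""
    parent = {source: source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in parent:   # first visit fixes the BFS parent
                parent[v] = u
                queue.append(v)
    return parent

# Tiny example graph as adjacency lists.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_parents(adj, 0))
```

Unlike dense linear algebra, this access pattern is dominated by irregular, data-dependent memory traffic, which is exactly the behaviour Graph500 is designed to stress.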

July 2011: Alexandru Iosup receives a Veni Grant for Massivizing Online Games using Cloud Computing

Hundreds of Massively Multiplayer Online Games (MMOGs) entertain over 250,000,000 online gamers in a maturing global market of over 7 billion Euros. In the Netherlands, gaming revenues have exceeded those of the film industry since 2007. To maintain competitiveness in games played over the Internet, Dutch studios (employing 1,500 developers) rely on innovation and advanced analysis of gamer profiles.

The exponential growth of the gaming community since 1998 means that resource and cost scalability is the biggest challenge for MMOGs. Faced with the Quality-of-Service constraints imposed by gamers, the current industry approach is to operate large-scale infrastructures. Resource-wise, this approach cannot scale to player surges; cost-wise, it blocks market access for amateur and small game developers, as it requires millions of Euros of initial (risky) investment. Thus, Dutch companies such as Khaeon are unable to capitalize on their innovative game designs and sell their services at a loss to larger (foreign) companies, forfeiting major operational benefits.

To overcome the scalability barriers, I will study a new fabric for small gaming studios that leverages cloud computing resources. Game operators can lease resources from commercial clouds and add them to their infrastructure on demand, exclusively when, where, and for how long needed; cloud operators can consolidate MMOG and other workloads to gain economies-of-scale and focus expertise. The use of clouds raises numerous distributed systems challenges; for example, variability in MMOG workload and cloud resource performance can incur, when unaccounted for, orders-of-magnitude higher operational costs. I will investigate and demonstrate three innovations: (i) quantifying the complete MMOG environment, including player-virtual world interaction and cloud performance variability; (ii) devising variability-aware mechanisms for efficient MMOG resource provisioning and allocation; (iii) exploiting the interplay between virtual worlds and analytics in MMOG workloads. This work also benefits the design of collaborative and interactive applications in scientific/commercial domains.

Initial results leading up to the formulation of the grant and proving for the first time that gaming workloads have complex additional components, in comparison to traditional web and scientific computing, have been obtained on the DAS family of systems.

The DAS-4 has been instrumental in acquiring this grant: its resources will be used for conducting exploration, validation, and evaluation studies for distributed gaming systems.

June 2010: WebPIE wins the IEEE SCALE challenge

The VU University WebPIE team was announced as the winner of the 3rd IEEE International Scalable Computing Challenge (SCALE 2010). The objective of the SCALE 2010 challenge, sponsored by the IEEE Computer Society Technical Committee on Scalable Computing (TCSC), is “to highlight and showcase real-world problem solving using computing that scales”. SCALE 2010 was organised as part of CCGRID 2010, the 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing.

Jacopo Urbani, together with Niels Drost, presented WebPIE, a very large-scale Hadoop-based inference engine, running on 64 machines of the DAS-3 compute cluster. The competition involved submitting a paper, giving a presentation, doing a demo, and explaining the project in the demo/poster market. A live visualisation showed the RDF graph growing before the eyes of the audience as WebPIE raced through its inferences.

For more information, see the WebPIE homepage, and see the blog entries here and here.
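
The kind of reasoning WebPIE parallelizes can be illustrated with a toy example: forward-chaining a single RDFS rule (the transitivity of subClassOf) until a fixpoint is reached. WebPIE itself encodes such rules as MapReduce jobs over billions of triples; the sketch below only shows the shape of the computation on a handful of triples:

```python
def infer_subclass_closure(triples):
    """Repeatedly apply the RDFS rule
       (a subClassOf b) and (b subClassOf c)  =>  (a subClassOf c)
    until no new triples appear (a fixpoint).  A toy sketch, not
    WebPIE's MapReduce implementation."""
    facts = set(triples)
    while True:
        new = {(a, 'subClassOf', c)
               for (a, p, b) in facts if p == 'subClassOf'
               for (b2, p2, c) in facts if p2 == 'subClassOf' and b2 == b}
        if new <= facts:      # nothing new derived: fixpoint reached
            return facts
        facts |= new

# Two asserted triples; the closure derives ('Cat', 'subClassOf', 'Animal').
triples = [('Cat', 'subClassOf', 'Mammal'),
           ('Mammal', 'subClassOf', 'Animal')]
closure = infer_subclass_closure(triples)
```

Naively iterating rules like this over web-scale data is infeasible; WebPIE's contribution was restructuring such fixpoint computations so that each rule application becomes a scalable batch job.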

2009: Game research on DAS included in famous Mathematics book

John Romein and Henri Bal's work on the board game Awari has been selected for Dr. Cliff Pickover's math book (and its Dutch translation) as one of 250 milestones in the history of mathematics.

The research on Awari, proving that the game was a draw with perfect play, was completed on DAS-2 in 2002, but has been followed up by related work in the context of Computational Grids and Model Checking on DAS-3 and DAS-4 from 2008 onwards.

August, 2008: VU University researchers win two first prizes at DACH 2008

DACH 2008, or the First International Data Analysis Challenge for Finding Supernovae, is a competition which was held in conjunction with the IEEE Cluster/Grid 2008 international conference in Tsukuba, Japan. The competition was organized and supported by the IEEE Technical Committee on Scalable Computing, the Japan MEXT grant-in-aid for priority area research called Info-Plosion, and the Special Interest Group on High Performance Computing (SIGHPC) of the Information Processing Society of Japan (IPSJ).

The competition was driven by the observation that the importance of large scale data analysis increases every year, not only in the scientific domain (e.g. high energy physics, astronomy, biology), but also in industry (e.g. web search engines). The objective of the competition was to follow this emerging trend, and to encourage efficient data analysis efforts in distributed environments.

In the DACH challenge, a large distributed database (of several hundred GBytes) of scientific data gathered by the Subaru telescope in Hawaii had to be searched to find new and unknown supernova candidates. A supernova is a phenomenon in which a star explodes in a spectacular manner, emitting a very large amount of light. For the calculations, the participants were given access to a supercomputer system comprising 12 compute clusters distributed over Japan.

  • In the Basic Category, the goal was to process all the data as fast as possible. A VU University team consisting of Jason Maassen and Frank Seinstra won the first prize. Their Ibis-based solution obtained by far the fastest result: while it required only 36 minutes for all calculations, the second-best team needed more than 1 hour, and all other teams obtained run-times 3 to more than 25 times longer.
  • In the Fault Tolerant Category, the goal was to process all the data as fast as possible, under artificial node failure. VU University's Kees van Reeuwijk won the first prize there. His solution was implemented in Maestro, a self-organizing data-flow programming model based on the Ibis IPL. Maestro was the only system that participated in the Fault Tolerant challenge. It used a total of 92 nodes, 34 of which were killed, and one of which crashed by itself. Maestro automatically restarted about 10 percent of all tasks, before returning the correct result.

May, 2008: VU University team wins first prize at SCALE 2008

SCALE 2008, or the First IEEE International Scalable Computing Challenge, is a competition organized by the IEEE Technical Committee on Scalable Computing (TCSC), and endorsed by the IEEE Technical Committee on Parallel Processing.

The objective of the competition, held in conjunction with the CCGrid 2008 international conference in Lyon, France, was to highlight and showcase real-world problem solving using scalable computing techniques. The contest focused on end-to-end problem solving using concepts, technologies and architectures (including clusters and grids) relevant to the overall scope of the TCSC. All participants in the challenge were expected to identify significant current real-world problems where scalable computing techniques can be effectively used, and to design, implement, evaluate and demonstrate their solutions.

At SCALE 2008 the VU University Ibis team presented a scalable distributed supercomputing solution for the multimedia domain. Specifically, the team developed an application in which a digital camera is capable of real-time 'recognition' of objects from a set of learned objects, while being connected to a world-wide grid system comprising clusters in Europe (including DAS-3), the United States, and Australia.

With their application they demonstrated true wall-socket grid computing. The entire application, including all required libraries, was stored on a single memory stick, which could be plugged into any Linux or Windows laptop with an appropriate JVM installed. From there, the application was compiled and started, with the world-wide set of available grid resources being employed entirely transparently.

November, 2007: StarPlane Project Drives Global Innovation and Collaboration with Real-Time Distributed Computing

At SuperComputing 2007 (SC'07), Nortel issued a press release about the StarPlane project. In this project, the University of Amsterdam, VU University and the Dutch National Research and Education Network provider SURFnet investigate the use of dynamic 10G lightpaths for Grid computing, using DAS-3 as a testbed.

Quotes from the press release:

  • "The StarPlane project provides researchers with access to massive computing power delivering the equivalent of the processing capacity of 500 personal computers to the desktop. StarPlane uses pure optical technology to link the Distributed ASCI Supercomputer 3 (DAS-3) computer clusters at five locations in The Netherlands into a grid to enable delivery of bandwidth on-demand, e.g. enabling computer scientists to reconfigure the topology of the distributed supercomputer. On-demand service activation of photonic networking is delivered using an extension to Nortel's Dynamic Resource Allocation Controller (DRAC) platform."
  • "Our next-generation hybrid optical and packet switching network delivered a paradigm shift in research networking," said Kees Neggers, managing director, SURFnet. "The full photonic implementation of the SURFnet6 network brings alive the possibilities created by coupling the applications and the network and is delivering a flexible application network experience that puts the high-end users and advanced applications in the driver's seat."
  • "Traditionally networks are seen as unpredictable resources and this project is changing that picture allowing for a wealth of new research," said Cees de Laat, associate professor at the University of Amsterdam (UvA). "With grid middleware interacting directly on the nationwide photonic layer enabling specification of optimal topologies per computational job, we are able to add another dimension in the resource allocation algorithms."

July, 2007: MultimediaN/UvA researchers win 'Most Visionary Research Award' at AAAI 2007

AAAI 2007, or the 22nd Conference on Artificial Intelligence, was the 2007 edition of the leading conference series in the field of AI. It was held in Vancouver, British Columbia, Canada from July 22-26, 2007. The purpose of the conference was to promote research in AI and scientific exchange among AI researchers, practitioners, scientists, and engineers in related disciplines.

AAAI 2007 held an exciting new event, which took place during the opening reception: the AI Video Competition. The objective of the competition was to communicate to the world the fun of pursuing research in AI, and illustrate the impact of some of its application areas. Submitters were asked to create narrated videos of 1-5 minutes in length, focused on interesting AI research. Videos were then reviewed by an international program committee. The creators of the best videos were presented with awards named in honor of Shakey, SRI's pioneering robot.

MultimediaN is a Dutch public-private non-profit organization in which the scientific world cooperates with industrial and other non-profit institutions. Together they strive to achieve high-quality multimedia solutions for the digital world of today and tomorrow. MultimediaN researchers Frank Seinstra and Jan-Mark Geusebroek (ISLA, Informatics Institute, University of Amsterdam) submitted a video on their research project "Color-Based Object Recognition by a Grid-Connected Robot Dog" to the AI Video Competition, and won the award in the most prestigious category: "Most Visionary Research". The video demonstrates their unique application in which a visual task is successfully performed by a robot connected to a set of compute clusters located around the world.

The application presented in the video is a close integration between a parallel image and video processing library implemented in C++ and MPI, and a wide-area communication and deployment framework implemented in Java and Ibis. The system itself has been shown live at several international conferences, such as ICME 2005 (Amsterdam, The Netherlands), ECCV 2006 (Graz, Austria), and SC2007 (Reno, NV, USA). Clearly, without the benefits of the Ibis system, moving the application from a controlled laboratory setting to a real-world and hostile grid environment would have been close to impossible.