Research Articles

Scalable Hierarchical Parallel Algorithm for the Solution of Super Large-Scale Sparse Linear Equations

Author and Article Information
Bin Liu

e-mail: liubin@tsinghua.edu.cn
Department of Engineering Mechanics,
Tsinghua University,
Beijing, 100084, PRC

Yuan Dong

Department of Computer Science and Technology,
Tsinghua University,
Beijing, 100084, PRC
e-mail: dongyuan@tsinghua.edu.cn

¹Corresponding author.

Manuscript received December 23, 2012; final manuscript received January 4, 2013; accepted manuscript posted January 23, 2013; published online February 5, 2013. Editor: Yonggang Huang.

J. Appl. Mech 80(2), 020901 (Feb 05, 2013) (8 pages) Paper No: JAM-12-1570; doi: 10.1115/1.4023481 History: Received December 23, 2012; Revised January 04, 2013; Accepted January 23, 2013

A parallel linear-equation solver capable of effectively using 1000+ processors has become the bottleneck of large-scale implicit engineering simulations. In this paper, we present a new hierarchical parallel master-slave-structural iterative algorithm for the solution of super large-scale sparse linear equations on a distributed-memory computer cluster. By alternately performing global equilibrium computation and local relaxation, the prescribed accuracy can be reached within a few iterations. Moreover, each set/slave-processor communicates mainly with its nearest neighbors, and the volume of data transferred between the sets/slave-processors and the master-processor always remains far below the communication between neighboring sets/slave-processors. The corresponding algorithm for implicit finite element analysis has been implemented with the MPI library, and a super large two-dimensional square system of triangle-lattice truss structure under randomly distributed loadings was simulated, with over 1 × 10⁹ degrees of freedom (DOF), on up to 2001 processors of the "Exploration 100" cluster at Tsinghua University. The numerical experiments demonstrate that this algorithm has excellent parallel efficiency and high scalability, and it may find broad application in other implicit simulations.
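The abstract's alternation of a global equilibrium computation (on the master level) with local relaxation (within each set) can be illustrated by a minimal serial sketch. The code below is an assumed, simplified structure for illustration only, not the authors' implementation: it solves a one-dimensional spring chain (cf. Fig. 9) by combining a coarse "master-level" correction, built from one piecewise-constant mode per set, with an exact local solve inside each set. The function names (`build_spring_system`, `hierarchical_solve`) and the coarse basis are hypothetical choices.

```python
import numpy as np

def build_spring_system(n, k=1.0):
    """Stiffness matrix of a 1D spring chain with fixed ends (cf. Fig. 9)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * k
        if i > 0:
            A[i, i - 1] = -k
        if i < n - 1:
            A[i, i + 1] = -k
    return A

def hierarchical_solve(A, b, n_sets=4, tol=5e-6, max_iter=500):
    """Alternate a global (master-level) correction with local relaxation."""
    n = len(b)
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_sets)   # the "sets"/subdomains
    # Coarse basis: one piecewise-constant mode per set (hypothetical choice).
    P = np.zeros((n, n_sets))
    for j, idx in enumerate(blocks):
        P[idx, j] = 1.0
    Ac = P.T @ A @ P                                # coarse master-level operator
    for it in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:
            return x, it
        # Global equilibrium computation on the master level.
        x += P @ np.linalg.solve(Ac, P.T @ r)
        # Local relaxation: exact solve within each set (block-Jacobi step).
        r = b - A @ x
        for idx in blocks:
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x, max_iter
```

In a distributed implementation, each block solve would run on its own slave processor and only the small coarse system `Ac` would involve the master, which is what keeps master traffic low.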

Copyright © 2013 by ASME



Fig. 1

A meshed two-dimensional solid structure is divided into sets/subdomains assigned to slave processors; the dividing lines cut through the elements

Fig. 2

(a) Schematic diagram of the inner nodes and the outer nodes of Set I, (b) local displacement-controlled relaxation, and (c) local force-controlled relaxation

Fig. 3

Flow chart of the hierarchical parallel algorithm

Fig. 4

Two-dimensional square truss system with a random force on each node

Fig. 5

The relative residual of the 1 × 10⁹ DOF test case decreases quasi-exponentially with the number of iterations. The test case is divided into 2000 sets.

Fig. 6

The number of iterations required (NIR) to meet the specified accuracy requirement (5 × 10⁻⁶) versus the number of DOFs in each set: a two-dimensional square system with 64 sets is tested under four different random loads

Fig. 7

The two-dimensional square system is tested with 2-2000 sets, each holding half a million DOFs. (a) NIR to meet the specified accuracy requirement (5 × 10⁻⁶) and (b) elapsed time per iteration, as functions of the number of sets. NIR converges rapidly, and the elapsed time per iteration grows only very slowly as sets are added.

Fig. 8

The parallel speedup of our algorithm is tested with 8 × 10⁶ DOF and 32 × 10⁶ DOF test cases

Fig. 9

A one-dimensional spring system used to represent a general symmetric linear system


