If you found this material useful, please click the Google ads on the right to support my continued knowledge sharing. Thank you! http://dengpeng.spaces.live.com/

Monday, June 4, 2007

433-678 Cluster and Grid Computing Quiz Ver. 0.91



(Brain dump, enjoy~)

 

1. Which of the following is a reliable communication and delivery protocol?

a)TCP/IP, b)UDP, c)MPI, d)None of the above

Answer: a

/*****************************

Solution:

TCP is a connection-oriented protocol that guarantees reliable, in-order delivery.

UDP is a connectionless protocol: it sends and forgets, with no delivery guarantee.

MPI (Message Passing Interface) is a de facto standard for message passing in parallel computing; it is a programming interface, not a transport protocol.
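
To make "sends and forgets" concrete, here is a minimal C sketch of a UDP sender using POSIX sockets (the loopback address and port 9000 are arbitrary examples). sendto() returns as soon as the datagram is handed to the network stack; nothing confirms it ever arrived. TCP, by contrast, would first establish a connection and retransmit lost segments.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* SOCK_DGRAM selects UDP */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);                 /* example port */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char msg[] = "hello";
    /* Fire and forget: no connection, no acknowledgement, no retransmission. */
    sendto(sock, msg, sizeof msg, 0, (struct sockaddr *)&dest, sizeof dest);

    close(sock);
    return 0;
}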

*****************************/

2. In Java Threads, which of the following methods executes a thread without blocking?

a)Thread.run(), b)Thread.join(), c)Thread.start(), d)Thread.interrupt()

Answer: c

/*****************************

Solution:

.run(): If this thread was constructed using a separate Runnable run object, then that Runnable object's run method is called; otherwise, this method does nothing and returns. Calling run() directly executes it in the current thread; no new thread is started.

.join(): Waits for this thread to die.

.start(): Causes this thread to begin execution; the Java Virtual Machine calls the run method of this thread. start() itself returns immediately, so the caller is not blocked.

.interrupt(): Interrupts this thread.
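
For comparison, POSIX threads in C behave the same way: pthread_create() returns immediately while the new thread runs concurrently (like Thread.start()), pthread_join() blocks until the thread finishes (like Thread.join()), and calling the thread function yourself (like calling run() directly) just executes it in the current thread. A minimal sketch, compiled with gcc -pthread:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("worker running\n");
    return NULL;
}

int main(void)
{
    pthread_t t;

    /* Like Thread.start(): returns at once; worker runs concurrently. */
    pthread_create(&t, NULL, worker, NULL);

    /* Like calling Thread.run() directly: an ordinary call in this thread. */
    worker(NULL);

    /* Like Thread.join(): blocks until the worker thread has finished. */
    pthread_join(t, NULL);
    return 0;
}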

*****************************/

3. Which of the models below follows SIMD?

a)Stream Processing, b)Uniprocessor, c)Vector Processing, d)None of the above

Answer: c

/*****************************

Solution:

Flynn’s taxonomy is a classification of computer architectures, based on the number of concurrent instruction streams and data streams available.

                Single instruction    Multiple instruction
Single data     SISD                  MISD
Multiple data   SIMD                  MIMD

1. Single Instruction, Single Data stream (SISD) - a sequential computer which exploits no parallelism in either the instruction or data streams. Examples of SISD architecture are the traditional uniprocessor machines like a PC or old mainframes.

2. Multiple Instruction, Single Data stream (MISD) - unusual due to the fact that multiple instruction streams generally require multiple data streams to be effective. However, this type is used when it comes to redundant parallelism, as for example on airplanes that need to have several backup systems in case one fails. Some theoretical computer architectures have also been proposed which make use of MISD, but none have entered mass production.

3. Single Instruction, Multiple Data streams (SIMD) - a computer which exploits multiple data streams against a single instruction stream to perform operations which may be naturally parallelised. For example, an array processor or GPU.

4. Multiple Instruction, Multiple Data streams (MIMD) - multiple autonomous processors simultaneously executing different instructions on different data. Distributed systems are generally recognised to be MIMD architectures; either exploiting a single shared memory space or a distributed memory space.

As of 2006, all the top 10 and most of the TOP500 supercomputers are based on a MIMD architecture.

- Single Program, Multiple Data streams (SPMD): multiple autonomous processors simultaneously executing the same program (but at independent points, rather than in the lockstep that SIMD imposes) on different data. Also referred to as "Single Process, Multiple Data". SPMD is the most common style of parallel programming.

- Multiple Program, Multiple Data streams (MPMD): multiple autonomous processors simultaneously running at least two independent programs. Typically such systems pick one node to be the "host" (the explicit host/node programming model) or "manager" (the manager/worker strategy), which runs one program that farms out data to all the other nodes, which all run a second program. Those nodes then return their results directly to the manager.

Stream Processing: MISD

Uniprocessor: SISD

Vector Processing: SIMD
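
To see the SIMD pattern in code, here is a minimal C sketch (array size and values are arbitrary examples). The loop applies a single instruction stream, the add in the loop body, across multiple data elements; a vector processor, or a compiler auto-vectorizing for SSE/AVX units, executes several of these additions per instruction.

#include <stdio.h>

#define N 8

int main(void)
{
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[N];

    /* One operation applied to many data elements: the SIMD pattern.
       A vectorizing compiler can turn this loop into vector instructions. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}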

*****************************/

4. “Speedup obtained by distributed execution of a problem is limited by the non-parallelizable (or serial) component of the problem”. This law is better known as:

a)Gustafson’s Law, b)Amdahl’s Law, c)Moore’s Law, d)Brooks’ Law

Answer: b

/*****************************

Solution:

1. Gustafson’s Law: for problems that scale with the machine, speedup grows with the number of processors, because the serial fraction becomes less significant as the workload grows. (The statement in the question, bounding speedup by the serial component, is Amdahl’s Law.)

2. Moore’s Law: the number of transistors on a single chip doubles roughly every 18–24 months.

3. Brooks’ Law: adding manpower to a late software project makes it later.
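
For reference, if P is the parallelizable fraction of the work and N the number of processors, Amdahl’s Law gives

Speedup(N) = 1 / ((1 - P) + P / N)

so with P = 0.9, for example, the speedup can never exceed 1 / (1 - 0.9) = 10, no matter how many processors are used. Gustafson’s Law instead lets the problem size grow with the machine: with s the serial fraction of the parallel run time, the scaled speedup is S(N) = N - s(N - 1), which keeps increasing with N.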

*****************************/

5. List 4 points of difference between Cluster and Symmetric Multi-Processing (SMP) systems:

Answer:

/*****************************

Solution:

- SMP has a single point of failure; a cluster does not.

- A cluster can be built from heterogeneous hardware and operating systems; an SMP system cannot.

- A cluster is more scalable than an SMP system.

- A cluster is easier and cheaper to build from commodity components.

*****************************/

6. Which of these is NOT an operating system used in clusters?

a)Linux, b)PBS, c)Windows NT, d)Solaris MC

Answer: b

/*****************************

Solution:

1. PBS (Portable Batch System) is a job scheduler that allocates network resources to batch jobs; it can schedule jobs to execute on networked, multi-platform UNIX environments. It is middleware running on top of an operating system, not an operating system itself, which is why it is the answer here.

2. Solaris MC (Multi-Computer): Solaris MC is a prototype distributed operating system for multi-computers (i.e., clusters of nodes) that provides a single-system image: a cluster appears to the user and applications as a single computer running the Solaris operating system.
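
To illustrate what PBS actually manages, here is a minimal sketch of a batch script submitted with qsub; the job name, resource requests, and program path are made-up examples.

#!/bin/sh
# Request two nodes with two processors each, for at most ten minutes,
# and merge stdout and stderr into a single output file.
#PBS -N example_job
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00
#PBS -j oe

# PBS starts the script in the home directory; move to where the job
# was submitted from.
cd $PBS_O_WORKDIR

# my_mpi_app is a hypothetical program; PBS only schedules it.
mpirun -np 4 ./my_mpi_app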

*****************************/

7. MPI programming is generally carried out at which level of granularity?

a)Large/Task, b)Medium/Function, c)Fine/Compiler, d)Very Fine/Hardware

Answer: a

/*****************************

Solution:

1. Large/Task level: separate processes, e.g. PVM or MPI programs.

2. Medium/Function level: threads.

3. Fine/Compiler level: blocks or lines of code, typically parallelized by the compiler.

4. Very fine/Hardware level: instruction-level parallelism exploited by the CPU.

*****************************/

8. “MPI_Comm_rank(MPI_COMM_WORLD, &rank);” this function call:

a) Initializes the MPI communicators

b) Asks for the rank of the host process

c) Sends a message to the process with the given “rank”

d) None of the above

Answer: b

/*****************************

Solution:

- Initializing MPI is done by MPI_Init(&argc, &argv). Note that MPI_Comm_size(MPI_COMM_WORLD, &numtasks) does not initialize the communicator; it returns the number of processes in it.

- Sending a message to the process with the given rank: MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);

buf      - address of the data to be sent
count    - number of elements of type datatype contained in buf
datatype - the MPI datatype
dest     - rank of the destination process in communicator comm
tag      - a marker used to distinguish different message types
comm     - the communicator shared by sender and receiver
ierror   - return value of the send (Fortran binding only)
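
Putting these calls together, here is a minimal self-contained MPI program (the payload value and tag are arbitrary examples); compile with mpicc and run with mpirun -np 2:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, numtasks;

    MPI_Init(&argc, &argv);                     /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);   /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* rank of this process */

    if (rank == 0 && numtasks > 1) {
        int msg = 42;                           /* arbitrary payload */
        /* Send one int to the process with rank 1, using tag 0. */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 of %d received %d\n", numtasks, msg);
    }

    MPI_Finalize();
    return 0;
}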

*****************************/

9. List 4 stages of the methodical design of parallel programs

Answer:

/*****************************

Solution:

1. Partitioning: decomposition of the computational activities and the data into small tasks. A number of paradigms exist, e.g. master/worker, pipeline, divide and conquer, SPMD, and speculation.

2. Communication: the flow of information and the coordination among the tasks created in the partitioning stage.

3. Agglomeration: the tasks and communication structure created in the previous stages are evaluated for performance and implementation cost. Tasks may be grouped into larger tasks, and individual communications bundled, to reduce communication costs.

4. Mapping/Scheduling: assigning tasks to processors so that job completion time is minimized and resource utilization is maximized. The cost of computation can also be minimized, subject to QoS requirements.

(These four stages are Foster's PCAM design methodology.)

*****************************/

10. At which level does MOSIX provide a single system image?

a)Middleware, b)Application, c)File system, d)Kernel/OS

Answer: d

/*****************************

Solution:

MOSIX is a management system for Linux clusters and organizational Grids that provides a Single-System Image (SSI). In a MOSIX cluster/Grid there is no need to modify or to link applications with any library, to copy files or login to remote nodes, or even to assign processes to different nodes - it is all done automatically, like in an SMP.

*****************************/

11. At what level does Google implement its search engine?

a) Application Level, b) Operating System Level, c)Hardware Level, d)Compiler Level

Answer: a

/*****************************

Solution:

From the user's point of view, Google provides a single system image at the application level; whether Google also implements SSI at other levels is not publicly known. Examples of SSI support at each level:

Application and Subsystem Level: Google, PBS, Oracle 10g, Sun NFS, ...

Operating System Kernel Level: Solaris MC, MOSIX, ...

Hardware Level: SCI (Scalable Coherent Interface), ...

*****************************/
