  * 16/03/2016 **MPI** MPI_Comm_split; collective communications (definition and semantics, execution environment, basic features, agreement of key parameters among the processes, constraints on Datatypes and typemaps for collective op.s, overall serialization vs synchronization, potential deadlocks); taxonomy of MPI collectives (blocking/non-blocking, synchronization/communication/communication+computation, asymmetry of the communication pattern, variable size versions, all- versions); MPI_IN_PLACE and collective op.s; basic blocking collective operations (barrier, broadcast, gather/gatherV, scatter/scatterV, allgather, alltoall/alltoallv).
  * 18/03/2016 **MPI LAB** Examples with derived datatypes (indexed, vector and their combinations).
  * 23/03/2016 **MPI** Composing Datatypes, derived datatype memory layout: explicitly setting and getting the extent; compute-and-communicate collectives, MPI_Reduce and its semantics; MPI operators (arithmetic, logic, bitwise, MINLOC and MAXLOC) and their interaction with Datatypes; defining custom MPI operators via MPI_Op_create.
  * 06/04/2016 **MPI LAB** Design and implementation of a simple farm skeleton in MPI. Reusability and separation of concerns in MPI: exploiting communicators for the skeleton and within the skeleton implementation; simple and multiple buffering; different communication primitives (Synch/Buffered and Blocking/Non-blocking) w.r.t. farm load distribution strategies: Round Robin, job request, implicit job request with double buffering.
  * 08/04/2016 **TBB (Thread Building Blocks)** TBB C++ template library overview: purpose, abstraction mechanisms and implementation layers (templates, runtime, supported abstractions, use of C++ concepts); tasks vs threads and hierarchical composability of parallel patterns; parallel_for, ranges and partitioners; task partitioning and scheduling, grain size and affinity; quick notes on the use of lambda expressions, containers and mutexes.
magistraleinformaticanetworking/spd/lezioni15.16 · Last modified: 30/07/2016 at 22:15 by Massimo Coppola
