Code Project
MPI blocking communication

C / C++ / MFC
2 Posts, 2 Posters
Member 15181211
#1

    I'm currently working on a lattice Boltzmann code (D3Q27) that uses MPI for parallelization. I've set up a 3D MPI topology for the communication, and the snippet below handles the exchange; the same structure is repeated for the front-back and up-down directions.

        void Simulation::Communicate(int iter)
        {
            int tag_xp = 0;
            int tag_xm = 1;
            int tag_yp = 2;
            int tag_ym = 3;
            int tag_zp = 4;
            int tag_zm = 5;
            MPI_Status status;

            if (SubDomain_.my_right_ != MPI_PROC_NULL)
            {
                std::vector<double> send_data;
                for (int k = 0; k < SubDomain_.my_Nz_; k++)
                {
                    for (int j = 0; j < SubDomain_.my_Ny_; j++)
                    {
                        if (SubDomain_.lattice_[SubDomain_.my_Nx_ - 2][j][k] == nullptr)
                        {
                            for (int dir = 0; dir < _nLatNodes; dir++)
                                send_data.push_back(0.0);
                        }
                        else
                        {
                            for (int dir = 0; dir < _nLatNodes; dir++)
                                send_data.push_back(SubDomain_.lattice_[SubDomain_.my_Nx_ - 2][j][k]->m_distributions[dir]);
                        }
                    }
                }

                std::vector<double> recv_data(send_data.size());
                MPI_Sendrecv(send_data.data(), send_data.size(), MPI_DOUBLE, SubDomain_.my_right_, tag_xp,
                             recv_data.data(), recv_data.size(), MPI_DOUBLE, SubDomain_.my_right_, tag_xm,
                             MPI_COMM_WORLD, &status);

                int index = 0;
                for (int k = 0; k < SubDomain_.my_Nz_; k++)
                {
                    for (int j = 0; j < SubDomain_.my_Ny_; j++)
                    {
                        for (int dir = 0; dir < _nLatNodes; dir++)
                        {
                            SubDomain_.lattice_[SubDomain_.my_Nx_ - 1][j][k]->m_distributions[dir] = recv_data[index];
                            index++;
                        }
                    }
                }
            }

            if (SubDomain_.my_left_ != MPI_PROC_NULL)
            {
                std::vector<double> send_data;
                for (int k = 0; k < SubDomain_.my_Nz_; k++)
                {
                    for (int j = 0; j < SubDomain_.my_Ny_; j++)
                    {
                        if (SubDomain_.lattice_[1][j][k] == nullptr)
                        {
                            for (int dir = 0; dir < _nLatNodes; dir++)
                                send_data.push_back(0.0);
                        }
                        else
                        {
                            for (int dir = 0; dir < _nLatNodes; dir++)
                                send_data.push_back(SubDomain_.lattice_[1][j][k]->m_distributions[dir]);
        // [snippet ends here in the original post]
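    The exchange above hinges on the pack and unpack loops walking the boundary slab in exactly the same (k, j, dir) order on both sides of the MPI_Sendrecv. Below is a minimal, MPI-free sketch of just that pack/unpack pattern, assuming nothing beyond the standard library: Ny, Nz, and nDirs stand in for my_Ny_, my_Nz_, and _nLatNodes, and empty vectors stand in for the nullptr lattice nodes (the names are illustrative, not taken from the poster's code).

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    constexpr int Ny = 3, Nz = 2, nDirs = 27;  // D3Q27 has 27 distributions per node

    // Pack one x-slab into a contiguous buffer in (k, j, dir) order.
    // nodes[k*Ny + j] holds nDirs distributions, or is empty (a "nullptr" node).
    std::vector<double> pack(const std::vector<std::vector<double>>& nodes) {
        std::vector<double> buf;
        buf.reserve(static_cast<std::size_t>(Nz) * Ny * nDirs);
        for (int k = 0; k < Nz; k++)
            for (int j = 0; j < Ny; j++) {
                const auto& node = nodes[static_cast<std::size_t>(k) * Ny + j];
                for (int dir = 0; dir < nDirs; dir++)
                    buf.push_back(node.empty() ? 0.0 : node[dir]);  // 0.0 placeholder, as in the post
            }
        return buf;
    }

    // Unpack into the ghost slab, consuming the buffer in the identical order.
    void unpack(const std::vector<double>& buf, std::vector<std::vector<double>>& ghost) {
        std::size_t index = 0;
        for (int k = 0; k < Nz; k++)
            for (int j = 0; j < Ny; j++) {
                auto& node = ghost[static_cast<std::size_t>(k) * Ny + j];
                node.assign(nDirs, 0.0);
                for (int dir = 0; dir < nDirs; dir++)
                    node[dir] = buf[index++];
            }
    }

    int main() {
        // Fill a slab with distinguishable values so any ordering mismatch shows up.
        std::vector<std::vector<double>> slab(Nz * Ny), ghost(Nz * Ny);
        for (int k = 0; k < Nz; k++)
            for (int j = 0; j < Ny; j++)
                for (int dir = 0; dir < nDirs; dir++)
                    slab[k * Ny + j].push_back(100.0 * k + 10.0 * j + dir);

        // In the real code the buffer travels through MPI_Sendrecv; here we hand
        // it straight to unpack to check that the ordering round-trips.
        unpack(pack(slab), ghost);
        assert(ghost == slab);
        return 0;
    }
    ```

    One detail worth double-checking against the sketch: the post's pack loop guards against nullptr nodes, but its unpack loop dereferences lattice_[my_Nx_ - 1][j][k] without the same check.
    
    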

jschell
#2

      Use code tags when you post code.

      The context of your question is not clear. There is theory, and there is practice (implementation). To which does your question refer? Are you asking whether your code is right, and only that? Or whether your theory is right, and only that? If the latter, then the code doesn't help. If the former, then you should provide some specifics about which part of the code you think has a problem. If you are mixing the two, I would suggest rethinking what it is you actually need to ask.

      I also suspect that, at least for the theory, you need to run on hardware that exercises this. It is not clear to me how your code ensures the hardware is even being used; that, however, might be both because you didn't use code tags and because I didn't look that closely at the code.
