
All Questions

3 votes
0 answers
42 views

Open MPI 5 partitioned communication with multiple neighbors

I've been trying out the partitioned communication that was added in openmpi 5. I am aware that "The current implementation is an early prototype and is not fully compliant with the MPI-4.0 ...
asked by user28656646
1 vote
0 answers
65 views

Overlaying OpenMP onto an MPI program slows down the region parallelised with OpenMP

I have a particle simulation in C which is split over 4 MPI processes and runs fast (compared to serial). However, one region of my implementation has N^2 complexity, where I need to compare each ...
asked by Luna Morrow
0 votes
1 answer
291 views

Xcode problem while executing C OpenMPI project

On an M3 Pro, in Xcode, I want to run a simple C source file: #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { MPI_Init(&argc, &argv); int world_size;...
asked by FueledByPizza
1 vote
0 answers
37 views

Custom MPI_Datatype inside OpenMPI MCA module

I'm implementing an Allreduce algorithm inside the mca/coll framework. The algorithm I'm implementing needs each node to send, at each step of the computation, only a part of the vector (like a ring ...
asked by Saverio Pasqualoni
0 votes
0 answers
23 views

MPI works with 4 processors but fails with more; problem with decomposition? Send/Recv? 2D heat diffusion

I have to simulate 2D plate heat diffusion. Everything works fine with 4 tasks, but it doesn't when using N*4 tasks. I suspect the problem is in the logic of communications between the upper and lower ...
asked by iustindinu
0 votes
0 answers
39 views

This MPI_Gatherv code is not working and I don't know why

I wrote this code in C that multiplies a matrix by a vector, locally in each process (ylocal), and then the root process gathers the results in its local vector (y). I am using MPI_Gatherv because it has ...
asked by Nementh
1 vote
0 answers
90 views

Generate and write to file a random matrix in a distributed, balanced way with MPI

I am trying to generate a random 4x4 matrix in a parallel, "even" way: given the 1D array (representing a matrix) dimension N (e.g. 16 in this case) and the number of processes NP, each ...
asked by G. Ianni
1 vote
0 answers
89 views

C: MPI_ERR_BUFFER: invalid buffer pointer Open MPI

I'm studying for a university exam about parallel programming with Open MPI. I'm trying to read a file with the master process (rank 0) and send some data to all worker processes, but I get a ...
asked by VITO GIACALONE
2 votes
1 answer
1k views

How to set the Open MPI C compiler when installed with conda?

I installed openmpi v4.1.6 (and dependencies) to a clean environment using anaconda conda-forge channel. After installing and attempting to compile, I get the error ------------------------------------...
asked by Eric Van Clepper
0 votes
0 answers
94 views

Segmentation Fault When Sending Arrays Over a Certain Size with Open MPI

I am writing a program to run on an InfiniBand, Intel-based cluster using Open MPI, PMIx, and SLURM scheduling. When I run my program on the cluster with an input matrix over 38x38 on each node, I ...
asked by Another Shrubbery
1 vote
1 answer
54 views

Is MPI_Ibcast buffered?

I'm using Open MPI for a little project, and I have a question. When I use MPI_Ibcast to send an array to all the other processes, can I start computing on the same array? Example: MPI_Ibcast(array, ... ...
asked by Vincenzo
0 votes
1 answer
79 views

MPI Broadcast on subset of MPI_COMM_WORLD results in deadlock

I don't get why the program deadlocks on rank=3 in the first iteration (possibly different for you). I want to do some batch processing, and if the batch size isn't a multiple of the number of processes I have to ...
asked by VladMir
0 votes
0 answers
56 views

MPI application performance gets worse when adding more processes

I wrote a parallel binary search algorithm with MPI. It works as expected in terms of searching for a value, but when -n is 1 (serial) the total time is much lower than for any value above that, like 2, 4, ...
asked by baha
0 votes
0 answers
108 views

Open MPI: how can multiple clients connected to one server form one communicator?

I want all the clients connected to the server to be in one communicator. I have tried merge, but it just merges the intra-comm and one client, and the output is an inter-comm, which can't be the param of ...
asked by du fei
2 votes
1 answer
69 views

Better way than looping over MPI_Send/MPI_Recv if data is not divisible by COMM_SIZE?

I use an MPI program to parallelize batches. Imagine multidimensional MRI images (3 spatial dimensions, coil data, ...), with several of them aggregated along a batch dimension. The program should ...
asked by VladMir
