Writing to .mat simultaneously with parfor loop
I'm running a test for 100 iterations, and each iteration saves its output into the same .mat file. At the moment, the test runs without any problem for over 100 iterations using a parfor loop. However, I'm worried that the .mat file could get corrupted when multiple workers write their output into the same file. I can only try with a maximum of 4 workers, but my main concern is: will this become a problem when the test is writing maybe 10 outputs simultaneously to the same file? I just need to make sure this is safe to do.
Calling save directly inside a parfor loop is not possible, so the saveData function is used to write the output from function1 to one .mat file:
parfor i = 1 : 100
    saveData(x, y, z);
end
% Inside saveData:
if ~exist(iMatFile, 'file')
    save(iMatFile, GetVariableName(iTtiNum, bFd), '-v7.3');
end
Edric Ellis on 10 Aug 2022
Don't attempt to save from multiple workers to the same file; this will sometimes fail (exactly when and how it fails depends on your system). I'm going to assume you have a good reason for wishing to save the results as you proceed rather than simply returning a result from your parfor loop. You have a number of options here:
- Have each worker write to its own unique file. (Use getCurrentTask and then the ID property to generate a file name.)
- Have each loop iteration write to a unique file, with the file name based on the iteration index.
- Use parallel.pool.DataQueue to send the results back to the client, and then use afterEach to append the data to a file as it is received.
Options (1) and (2) will need you to combine the files afterwards. Option (3) means that all the data gets transferred from workers to the client, so this might be slow if the data is really large.
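A minimal sketch of option (3), assuming a hypothetical per-iteration function someComputation and an assumed output file name results.mat. The afterEach callback runs on the client, so only one process ever touches the file:

```matlab
% Stream results back to the client and append them to one .mat file.
q = parallel.pool.DataQueue;
outFile = 'results.mat';                 % assumed output file name
afterEach(q, @(d) appendResult(outFile, d));

parfor i = 1:100
    result = someComputation(i);         % placeholder for the real work
    send(q, {i, result});                % ship {index, data} to the client
end

function appendResult(outFile, d)
    % Store each result under its own variable name, e.g. iter_17.
    s = struct(sprintf('iter_%d', d{1}), d{2});
    if isfile(outFile)
        save(outFile, '-struct', 's', '-append');
    else
        save(outFile, '-struct', 's', '-v7.3');
    end
end
```

Because the appending happens serially on the client, no file locking is needed, at the cost of transferring every result over the wire.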
More Answers (1)
Walter Roberson on 10 Aug 2022
This is a valid concern: save() does not promise to be thread-safe.
If I recall correctly, a few weeks ago there was a post from someone who was encountering corruption under these circumstances. I believe that they ended up writing to separate files. Only one file per worker is required, not one file per iteration.
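A minimal sketch of the one-file-per-worker approach, using getCurrentTask and its ID property as suggested above; the file prefix out_worker_ and the function someComputation are placeholders. Note that save is called inside a helper function, not directly in the parfor body, which keeps the loop transparent:

```matlab
parfor i = 1:100
    t = getCurrentTask();
    if isempty(t)
        workerId = 0;                    % running serially, no pool
    else
        workerId = t.ID;                 % unique per pool worker
    end
    fname = sprintf('out_worker_%d.mat', workerId);
    data = someComputation(i);           % placeholder for the real work
    appendToWorkerFile(fname, i, data);
end

function appendToWorkerFile(fname, i, data)
    % Each worker appends only to its own file, so no two processes
    % ever write the same .mat file concurrently.
    s = struct(sprintf('iter_%d', i), data);
    if isfile(fname)
        save(fname, '-struct', 's', '-append');
    else
        save(fname, '-struct', 's', '-v7.3');
    end
end
```

The per-worker files can then be combined into one .mat after the loop finishes, e.g. by looping over them with load and a single final save on the client.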