Scripting parallel iterative deconvolution with many time points on a large cluster using ImageJ

Updated: 2023-09-26

I have an interesting ImageJ scripting problem I'd like to share. An imaging scientist gave me a dataset with 258 time points, each consisting of a 13-image Z-stack: 3,354 tif images in total. He has a macro, built with ImageJ's macro recorder, that works on his Windows machine but takes a very long time. I have access to a very large academic computing cluster, where I can imagine requesting as many nodes as there are time points.

The input files are the 3,354 tif images, named like "img_000000000_ZeissEpiGreen_000.tif", where the nine-digit field increments by one across the 258 time points and the three-digit field is the Z-stack order, 1-13. The other input file is a point-spread-function image made from sub-resolution beads. Here is the macro, "iterative_parallel_deconvolution.ijm". I changed the paths to match the required paths on the cluster.

//******* SET THESE VARIABLES FIRST!  ********
path = "/tmp/images/";
seqFilename = "img_000000000_ZeissEpiGreen_000.tif";
PSFpath = "/tmp/runfiles/20xLWDZeissEpiPSFsinglebeadnoDICprismCROPPED64x64.tif";
numTimepoints = 258;
numZslices = 13;
xyScaling = 0.289; //microns/pixel
zScaling = 10; //microns/z-slice
timeInterval = 300; //seconds
//********************************************
getDateAndTime(year1, month1, dayOfWeek1, dayOfMonth1, hour1, minute1, second1, msec); //to print start and end times
print("Started " + month1 + "/" + dayOfMonth1 + "/" + year1 + " " + hour1 + ":" + minute1 + ":" + second1);
//number of images in sequence
fileList = getFileList(path);
numImages = fileList.length;
//filename and path for saving each timepoint z-stack
pathMinusLastSlash = substring(path, 1, lengthOf(path) - 1);
baseFilenameIndex = lastIndexOf(pathMinusLastSlash, "/");
baseFilename = substring(pathMinusLastSlash, baseFilenameIndex + 1, lengthOf(pathMinusLastSlash));
saveDir = substring(path, 0, baseFilenameIndex + 2);
//loop to save each timepoint z-stack and deconvolve it
for(t = 0; t < numTimepoints; t++){
        time = IJ.pad(t, 9);
        run("Image Sequence...", "open=[" + path + seqFilename + "] number=" + numImages + " starting=1 increment=1 scale=100 file=[" + time + "] sort");
        run("Properties...", "channels=1 slices=" + numZslices + " frames=1 unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[0 sec] origin=0,0");
        filename = baseFilename + "-t" + time + ".tif";
        saveAs("tiff", saveDir + filename);
        close();
        // WPL deconvolution -----------------
        pathToBlurredImage = saveDir + filename;
        pathToPsf = PSFpath;
        pathToDeblurredImage = saveDir + "decon-WPL_" + filename;
        boundary = "REFLEXIVE"; //available options: REFLEXIVE, PERIODIC, ZERO
        resizing = "AUTO"; // available options: AUTO, MINIMAL, NEXT_POWER_OF_TWO
        output = "SAME_AS_SOURCE"; // available options: SAME_AS_SOURCE, BYTE, SHORT, FLOAT
        precision = "SINGLE"; //available options: SINGLE, DOUBLE
        threshold = "-1"; //if -1, then disabled
        maxIters = "5";
        nOfThreads = "32";
        showIter = "false";
        gamma = "0";
        filterXY = "1.0";
        filterZ = "1.0";
        normalize = "false";
        logMean = "false";
        antiRing = "true";
        changeThreshPercent = "0.01";
        db = "false";
        detectDivergence = "true";
        call("edu.emory.mathcs.restoretools.iterative.ParallelIterativeDeconvolution3D.deconvolveWPL", pathToBlurredImage, pathToPsf, pathToDeblurredImage, boundary, resizing, output, precision, threshold, maxIters, nOfThreads, showIter, gamma, filterXY, filterZ, normalize, logMean, antiRing, changeThreshPercent, db, detectDivergence);
}
//save deconvolved timepoints in one TIFF
run("Image Sequence...", "open=["+ saveDir + "decon-WPL_" + baseFilename + "-t000000000.tif] number=999 starting=1 increment=1 scale=100 file=decon-WPL_" + baseFilename + "-t sort");
run("Stack to Hyperstack...", "order=xyczt(default) channels=1 slices=" + numZslices + " frames=" + numTimepoints + " display=Grayscale");
run("Properties...", "channels=1 slices=" + numZslices + " frames=" + numTimepoints + " unit=um pixel_width=" + xyScaling + " pixel_height=" + xyScaling + " voxel_depth=" + zScaling + " frame=[" + timeInterval + " sec] origin=0,0");
saveAs("tiff", saveDir + "decon-WPL_" + baseFilename + ".tif");
close();
getDateAndTime(year2, month2, dayOfWeek2, dayOfMonth2, hour2, minute2, second2, msec);
print("Ended " + month2 + "/" + dayOfMonth2 + "/" + year2 + " " + hour2 + ":" + minute2 + ":" + second2);
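As a point of reference for splitting the work up later, the 13 slice filenames belonging to a single time point can be enumerated with ordinary zero-padding. A minimal shell sketch, assuming (my assumption from the naming scheme) that both numeric fields are zero-based:

```shell
# Enumerate the 13 Z-slice filenames for one time point, assuming the
# nine-digit field is the zero-based time index and the three-digit
# field is the zero-based Z index.
t=41
printf -v tpad '%09d' "$t"
for z in $(seq 0 12); do
    printf -v zpad '%03d' "$z"
    echo "img_${tpad}_ZeissEpiGreen_${zpad}.tif"
done
```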

The website for the Parallel Iterative Deconvolution ImageJ plugin is here: https://sites.google.com/site/piotrwendykier/software/deconvolution/paralleliterativedeconvolution

Here is the PBS script I used to submit the job to the cluster, with the command "qsub -l walltime=24:00:00,nodes=1:ppn=32 -q largemem ./PID3.pbs". I could have asked for up to 40 ppn, but the program requires the thread count to be a power of two.

#PBS -S /bin/bash
#PBS -V
#PBS -N PID_Test
#PBS -k n
#PBS -r n
#PBS -m abe
Xvfb :566 &
export DISPLAY=:566.0 &&
cd /tmp &&
mkdir -p /tmp/runfiles /tmp/images &&
cp /home/rcf-proj/met1/pid1/runfiles/* /tmp/runfiles/ &&
cp /home/rcf-proj/met1/pid1/images/*.tif /tmp/images/ &&
java -Xms512G -Xmx512G -Dplugins.dir=/home/rcf-proj/met1/software/fiji/Fiji.app/plugins/ -jar /home/rcf-proj/met1/software/imagej/ij.jar -batch /tmp/runfiles/iterative_parallel_deconvolution.ijm &&
tar czf /tmp/PIDTest.tar.gz /tmp/images &&
cp /tmp/PIDTest.tar.gz /home/rcf-proj/met1/output/ &&
rm -rf /tmp/images &&
rm -rf /tmp/runfiles &&
exit
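As a side note on the power-of-two restriction mentioned above, a quick bitwise check (my own helper, not part of the plugin or of PBS) can validate a requested thread count before submitting:

```shell
# Check that the requested thread count is a power of two, as required
# by Parallel Iterative Deconvolution: n & (n-1) == 0 only for powers of two.
ppn=32
if [ "$ppn" -gt 0 ] && [ $(( ppn & (ppn - 1) )) -eq 0 ]; then
    echo "ppn=$ppn is a power of two"
else
    echo "ppn=$ppn is NOT a power of two"
fi
```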

We had to use Xvfb to give ImageJ a fake display to send its images to; the display number is arbitrary. The program ran for six hours but produced no output images. Is it because I need to open the images?

I would like to rework this macro so that each time point can be split off and sent to its own node for processing. Any ideas on how to do this would be greatly appreciated. The one caveat is that we must use the Parallel Iterative Deconvolution plugin with ImageJ.
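One way I have been considering is a PBS array job, sketched below. The script name, walltime, staging paths, and the getArgument() hand-off are all assumptions on my part; the macro would have to be changed to deconvolve only the time point given by its argument instead of looping over all of them.

```shell
#!/bin/bash
# Hypothetical array-job wrapper: submit with e.g.
#   qsub -t 1-258 -l walltime=4:00:00,nodes=1:ppn=32 ./PID_array.pbs
# so each array task handles exactly one time point.
T=${PBS_ARRAYID:-1}                    # 1..258, one per array task
printf -v TPAD '%09d' "$(( T - 1 ))"   # zero-based nine-digit time index
SCRATCH=/tmp/pid_t${T}
mkdir -p "$SCRATCH"
# Stage only this time point's 13 slices onto node-local scratch:
#   cp /home/rcf-proj/met1/pid1/images/img_${TPAD}_*.tif "$SCRATCH/"
# Then run a per-timepoint macro headless, passing the index as an
# argument that the macro would read with getArgument():
#   ImageJ-linux64 --headless -batch /tmp/runfiles/decon_one_timepoint.ijm "$TPAD"
echo "task $T would process img_${TPAD}_ZeissEpiGreen_*.tif in $SCRATCH"
```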

Thanks!

Regarding the use of Xvfb: if you use Fiji's ImageJ launcher (in your case most likely ImageJ-linux64), you can use its --headless option, which takes care of all the GUI calls embedded in ImageJ and has been tested by many people running ImageJ in cluster environments.
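For example, assuming Fiji is unpacked under the path already used in your PBS script (an assumption on my part), the Xvfb/DISPLAY setup and the raw java invocation could be replaced by a single headless call along these lines:

```shell
# Build the headless Fiji command line; no X server is needed.
FIJI_HOME=/home/rcf-proj/met1/software/fiji/Fiji.app   # assumed install path
MACRO=/tmp/runfiles/iterative_parallel_deconvolution.ijm
CMD="$FIJI_HOME/ImageJ-linux64 --headless -batch $MACRO"
echo "$CMD"
```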

That way you would also benefit from seeing the messages produced by e.g. IJ.log() calls in your macro, which I'm not sure is the case with the way you are currently invoking ImageJ.

You could also consider placing a setBatchMode(true) at the beginning of your macro, though I'm not sure whether that makes any difference when running in --headless mode. See the BatchModeTest.txt example for details.

Since you are planning to run this on a cluster, it may also be worth looking at the Fiji Archipelago page in the wiki, which gives many details and hints on how to achieve this.

Cheers, Niko