Abstract: This dataset includes 3D µCT images of nine different specimens of 10 mm × 10 mm of a carbon fiber reinforced polyamide 6 plaque produced in the long fiber reinforced thermoplastic direct (LFT-D) process. The positions of the specimens in the plaque can be found in the referenced publication (Blarr et al., Implementation and comparison of algebraic and machine learning based tensor interpolation methods applied to fiber orientation tensor fields obtained from CT images, Computational Materials Science, 2022). After minor pre-processing steps, the fiber orientation tensor of each image stack is determined with the structure tensor based implementation by Pinter et al. The code can be found here: https://sourceforge.net/p/composight/code/HEAD/tree/trunk/SiOTo/StructureTensorOrientation/FibreOrientation/StructureTensorOrientationFilter.cxx#l186. Hence, nine .dat files containing the second-order fiber orientation tensors are also included in this dataset.
Most importantly, this dataset contains three different Python codes, each implementing a different interpolation method: two algebraic ones and one machine learning based one. The component averaging method is the simplest; the decomposition method is mathematically more involved: it decomposes each tensor into its shape and orientation parts, weights the invariants and the orientation quaternions separately, and reassembles the interpolated tensor afterwards. The deep learning based method, provided as the only Jupyter notebook in this dataset, implements an ANN for the same interpolation task. Please refer to the reference paper mentioned above for details.
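As a purely illustrative sketch of the two algebraic approaches (not the author's actual scripts, which are included in the "code" folder), a weighted interpolation between two second-order fiber orientation tensors could be written as follows in Python; the restriction to two tensors, the function names, and the use of SciPy's rotation utilities are assumptions made here for brevity:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def component_average(A, B, w):
    """Component averaging: weighted average of the tensor components."""
    return (1.0 - w) * A + w * B

def decomposition_interpolate(A, B, w):
    """Decomposition method (sketch): separate the shape (eigenvalues) from the
    orientation (eigenvectors), interpolate the eigenvalues linearly and the
    orientation via quaternion SLERP, then reassemble the tensor.
    Eigenvector ordering/sign consistency is ignored here for brevity."""
    lam_A, V_A = np.linalg.eigh(A)
    lam_B, V_B = np.linalg.eigh(B)
    # make the eigenvector bases proper rotations (det = +1)
    for V in (V_A, V_B):
        if np.linalg.det(V) < 0:
            V[:, 0] *= -1.0
    lam = (1.0 - w) * lam_A + w * lam_B
    slerp = Slerp([0.0, 1.0], Rotation.from_matrix([V_A, V_B]))
    R = slerp([w]).as_matrix()[0]
    return R @ np.diag(lam) @ R.T

# toy example: two orientation tensors differing by a 30 degree in-plane rotation
A = np.diag([0.8, 0.15, 0.05])
Rz = Rotation.from_euler("z", 30, degrees=True).as_matrix()
B = Rz @ A @ Rz.T
print(component_average(A, B, 0.5))
print(decomposition_interpolate(A, B, 0.5))
```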
For the visualization of the tensor glyphs, a Matlab function by Barmpoutis is used, which can be found here: https://de.mathworks.com/matlabcentral/fileexchange/27462-diffusion-tensor-field-dti-visualization.
TechnicalRemarks: The folder "code" contains three Python codes. The scripts "component_averaging_method.py" and "decomposition_method.py" work the same way: each needs an input .txt file with coordinates and the corresponding fiber orientation tensors; the example used in the publication is provided as "Input_file_FOT.txt". After starting the script, you are asked in the console for the name of the output file and for the lower and upper x and y limits, which are 1 and 13, respectively, in the given case. The script then calculates the fiber orientation tensors at all missing positions with the respective method and writes them into a MATLAB file named as specified in the console. This MATLAB file is structured so that the fiber orientation tensors can be plotted directly with the tensor glyph visualization function "plotDTI" by Barmpoutis referenced in the abstract.
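The exact column layout of "Input_file_FOT.txt" and of the generated MATLAB file is defined in the scripts themselves; as a hedged illustration of the data flow only, assuming one line per measured position with x, y and the six independent tensor components, reading the input and writing a MATLAB-readable file could look roughly like this (file names, variable names, and the column order are assumptions):

```python
import numpy as np
from scipy.io import savemat

# assumed layout: x  y  A11  A22  A33  A12  A13  A23  (one measured position per line)
data = np.loadtxt("Input_file_FOT.txt")
coords = data[:, :2]
tensors = np.empty((3, 3, len(data)))
for k, (a11, a22, a33, a12, a13, a23) in enumerate(data[:, 2:]):
    tensors[:, :, k] = [[a11, a12, a13],
                        [a12, a22, a23],
                        [a13, a23, a33]]

# ... interpolation at the missing grid positions would happen here ...

# store the (interpolated) tensors as a 3x3xN array so that MATLAB can pass them
# on to a glyph plotting routine such as plotDTI
savemat("interpolated_FOT.mat", {"D": tensors, "coords": coords})
```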
The Jupyter notebook "ANN_method.ipynb" works somewhat differently, as it implements an artificial neural network. It requires .csv files as input data: the components of the tensors are provided in separate files, and the coordinates of the positions in another separate .csv file. This is documented in the paper as well. The output is again a .csv file, which has to be transferred to MATLAB if users want to use the same visualization function.
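The notebook defines its own network architecture and input file names; as a rough, framework-agnostic sketch of the idea (coordinates in, tensor components out), using hypothetical file names and scikit-learn's MLPRegressor instead of the notebook's actual ANN, one could write:

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

# hypothetical file names: coordinates and tensor components in separate .csv files
coords = pd.read_csv("coordinates.csv").to_numpy()             # shape (N, 2): x, y
components = pd.read_csv("tensor_components.csv").to_numpy()   # shape (N, 6): A11 ... A23

# train a small fully connected network to map positions to tensor components
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(coords, components)

# predict the tensor components at previously unseen (missing) positions
missing = np.array([[2.0, 3.0], [5.0, 7.0]])
predicted = model.predict(missing)
pd.DataFrame(predicted).to_csv("predicted_FOT.csv", index=False)
```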
The folder "scans_and_FOT" includes all nine scans and respective fiber orientation tensors used for the publication. The scans are given as .mhd- and .raw-files, the orientation tensors are given in the .dat-files. To generate the fiber orientation tensors from the images, the code by Pinter et al., which is given in the abstract, was used. This C++ code writes out a vector valued image with the orientations per voxel. From this, again with another MATLAB file, which composes the orientation tensor from the vector-valued image, these .dat files can be generated. As this is not the main focus of the publication, and the functionality of the python scripts can be verified with the given orientation tensors, this MATLAB script is not part of this dataset.
Please consider the paper or contact the author Juliane Blarr for further questions.