blank nifti header
Hi @ibrahimhamamci , @sezginer ,
I've managed to download the entire dataset now 🙂 thanks a lot for your help! When I load the images into a NIfTI viewer (e.g. MITK Workbench), they are displayed incorrectly because the required information is not in the NIfTI header but only in the metadata.csv file.
Crucial info missing: Spacing, Origin, Direction, pixel rescaling (scl_slope, scl_inter), dtype (float to int16).
Once these are addressed, the image is correctly displayed.
I was wondering if there was a specific reason for not including this metadata directly in the NIfTI headers in the initial release? I've created a script that automatically writes the needed metadata back into the NIfTI headers. Also, storing the volumes as int16 instead of float roughly halves the dataset footprint (4 bytes per float32 vs 2 bytes per int16). If you're interested, I'd love to share it through a pull request; please let me know if this fits into your plans and how I can help out! 🙂
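For reference, here is a minimal sketch of the kind of fix the script applies; the spacing and rescale values below are placeholders and would in practice come from the matching row of metadata.csv:
import nibabel as nib
import numpy as np

# Placeholder values; in practice read them from the metadata.csv row for this volume.
spacing = (0.75, 0.75, 1.5)      # x, y, z spacing in mm (example only)
slope, intercept = 1.0, -1024.0  # example rescale values

img = nib.load("valid_1_a_1.nii.gz")
data = img.get_fdata()

# Apply the rescale so voxel values are Hounsfield units, then store as int16.
hu = np.round(data * slope + intercept).astype(np.int16)

# Encode the spacing in the affine (origin and direction would be set the same way,
# omitted here for brevity).
affine = np.diag([spacing[0], spacing[1], spacing[2], 1.0])

fixed = nib.Nifti1Image(hu, affine)
nib.save(fixed, "valid_1_a_1_fixed.nii.gz")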
Hi @farrell236 ,
Thank you very much! We actually extracted the images from the PACS in DICOM format and directly converted them to .npz format. The reason for this was that we also trained the CT-Net model as a baseline in our paper, which uses .npz images in its dataloader (and we did not want to change anything about the baseline). However, the NPZ format takes up much more space than .nii.gz images. Therefore, we decided to convert them to .nii.gz format before pushing them to the Hugging Face repository, and then convert them back to .npz format in the data preprocessing script (see the GitHub repo for this). As .npz images cannot contain metadata, we have a separate metadata CSV file for the dataset, which is used in the dataloader. We actually never used an external .nii.gz viewer except for ImageJ, which also does not require any metadata. However, I can see that it makes sense to include the necessary metadata in the nii.gz images for visualization purposes.
Regarding the int16 format, it makes sense to store the volumes as int16 since it requires less space. I am happy to check whether it affects anything in the CT-CLIP codebase or changes the output metrics (though this is very unlikely) if you can send the PR!
ahhh ok 🙂 I would suggest using dcm2niix to convert the DICOMs to NIfTI. It handles the anonymization whilst preserving important metadata in the NIfTI header (pixel spacing, slice thickness, etc.). You can also use SimpleITK directly in PyTorch dataloaders to load the .nii.gz files. I'll update with a PR shortly.
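For example, a minimal sketch of loading the volumes with SimpleITK inside a PyTorch dataset (the class and paths here are placeholders, not the PR itself):
import SimpleITK as sitk
import torch
from torch.utils.data import Dataset

class NiftiDataset(Dataset):
    # Loads .nii.gz volumes with SimpleITK; file_paths is a list of paths.
    def __init__(self, file_paths):
        self.file_paths = file_paths

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        image = sitk.ReadImage(self.file_paths[idx])  # reads header and voxels
        spacing = image.GetSpacing()                  # (x, y, z) in mm from the header
        array = sitk.GetArrayFromImage(image)         # numpy array, z-first
        return torch.from_numpy(array).float(), torch.tensor(spacing)

# usage (hypothetical path):
# ds = NiftiDataset(["valid_1_a_1.nii.gz"])
# volume, spacing = ds[0]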
Hi @sezginer ,
Thank you again for your help with downloading the dataset and for clarifying your data curation process. Unfortunately, while going through the volumes, I came across additional issues, which I believe stem from the indirect DICOM to NPZ to NIfTI route: essentially, some of the metadata in the csv file does not seem to match the NIfTI volume. For example, in the images below (valid_1080_a_1):
original:
fixed:
You can clearly see that even after fixing the pixel spacing in the NIfTI header with the information provided in metadata.csv, the image is still squished, which indicates that the information in metadata.csv is incorrect. We think it is likely that the study the volume came from had multiple reconstructions, e.g. both thick- and thin-slice volumes, so the actual image is a thick-slice volume but the metadata corresponds to a thin-slice one. This also happens in other volumes, e.g. valid_1_a_1, valid_10_a_1, valid_11_a_2, valid_102_a_1, valid_103_a_2...
This is a big problem because networks such as nnU-Net use the spacing information in their preprocessing/training pipeline; if it is incorrect, then it is not possible to use nnU-Net at all.
Would you be able to run dcm2niix on the original DICOM volumes?
We would be happy to assist you (online or offline) in any way possible regarding this issue.
Hi @farrell236 , thank you very much for bringing this to our attention. I believe this is caused by a problem in the anonymization before we published the dataset (probably we added the same Z spacing to every reconstruction). You are right that some CT volumes have two different reconstructions with two different Z spacings (specifically HRCT: when it has a 1024 x 1024 reconstruction, that reconstruction has half the number of slices). I am pretty sure we have the correct Z spacings in the non-anonymized data, so I will just fix the metadata with the correct spacings. I will update the xlsx file with the correct spacings hopefully today. It is still strange, though, that it does not work for 768 x 768 images like valid_1080_a_1. I will check this quickly as well. In the meantime, if you don't want to wait, you can manually change the values: the Z spacing for 1024 x 1024 should be 1.5 and for 512 x 512 should be 0.75 when the series description is HRCT. This should fix the problem temporarily for you, at least for most of the data (again, I am not sure about the 768 x 768 images).
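If you want to apply that interim rule in code, a rough sketch could look like this (the column names are only guesses for illustration, not necessarily the actual csv headers):
import pandas as pd

# Column names below (SeriesDescription, XYSize, ZSpacing) are illustrative guesses.
df = pd.read_csv("metadata.csv")

hrct = df["SeriesDescription"].str.contains("HRCT", na=False)
df.loc[hrct & (df["XYSize"] == 1024), "ZSpacing"] = 1.5
df.loc[hrct & (df["XYSize"] == 512), "ZSpacing"] = 0.75
# 768 x 768 scans are left untouched, as discussed above.

df.to_csv("metadata_fixed.csv", index=False)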
I do not think dcm2niix works in that case, as the Slice Thickness attribute from our CT machines is not correctly added to the metadata for some reason. We actually calculated the Z spacings manually for each DICOM series for that reason. If this is still a problem after I correct the Z spacings, I can still try to run it on the raw data, which we still have, but it might take a couple of days as the drive that I store it on is currently at the hospital.
Please let me know if that makes sense to you!
Thanks @sezginer for checking this, I'll have a look at it too. Since slice thickness and slice spacing are different metadata variables, dcm2niix computes the spacing between slices itself and does not rely on the Slice Thickness attribute in the DICOM header.
If it's fixable by editing the xlsx then I think it would be good to attempt that first. You are right, running dcm2niix would take much longer and probably wouldn't be necessary if the issue is already fixed. 🙂
Hi @farrell236 ,
I believe I have identified the issue and will promptly address it. It appears that we generated the metadata xlsx file per accession number instead of per reconstruction for the z-spacing part. Consequently, the xlsx file contains multiple z-diff values for the same accession number, with the anonymization code selecting the one at the 0th index. For instance, for BGC2538395 there are values of 0.75 and 1.5, and the code currently chooses 0.75 before anonymizing it. It is relatively easy to map these for the 512 and 1024 reconstructions (the bigger value to 1024 and the smaller to 512). Interestingly, there are also two different z-spacing values for 768 x 768 images (I still have no idea why, as there is only one reconstruction for 768 px resolution scans). However, I believe that the larger one is correct there as well, though I will verify this. I have the spacing values calculated for each reconstruction stored on the disk at the hospital, which I will double-check (most likely tomorrow). But for now I will send the temporarily fixed XLSX file.
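For illustration, that mapping could look roughly like this (the column names AccessionNumber, XYSize and ZDiff are placeholders, not the actual xlsx headers):
import pandas as pd

# Illustrative sketch only; column names are placeholders.
df = pd.read_excel("metadata.xlsx")

def assign_zdiff(group):
    # Within one accession number: larger z-diff goes to the 1024 reconstruction,
    # smaller z-diff goes to the 512 reconstruction.
    zvals = sorted(group["ZDiff"].unique())
    if len(zvals) == 2:
        group.loc[group["XYSize"] == 1024, "ZDiff"] = max(zvals)
        group.loc[group["XYSize"] == 512, "ZDiff"] = min(zvals)
    return group

df = df.groupby("AccessionNumber", group_keys=False).apply(assign_zdiff)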
Thank you once again for bringing this to our attention!
I will start running dcm2niix on the raw data anyway to see if there are any inconsistencies.
Hi @farrell236 , I have updated the z spacings in the metadata CSV files from our unanonymized dataset. I've also tested the examples you sent, and everything seems to be working fine. Sorry for the delay; I had to go to the hospital to get the drive. Hope this helps!
Many thanks @sezginer ! I will run the script (the one in the open PR) and check again also 🙂
Yes, I will merge it as well.
How big is the dataset? Can it be downloaded to a Colab notebook? I'm looking to use this dataset for a downstream task but I can't load the .gz files in Colab. I can only access the reports, labels etc
Hi @Adeptschneider ,
Are you using load_dataset_builder? I have removed the volumes from the dataset builder due to the lack of support for nii.gz images. You can find more details in this discussion: Dataset Builder Issue.
The dataset size is approximately 12 TB if you use the scripts I have sent here. If you use git clone, it will take 2x the space. If you set local_dir_use_symlinks=False, it will also double the required storage space, as it copies the nii.gz files into the local directory in addition to .cache. So I don't recommend either if you have limited space. See more info in our discussion here.
How much space do you have available in your Colab? Is it the default 100GB? Additionally, do you require the entire dataset at the same time, or will you be working with the images individually? If the latter, you can download and delete the images as needed. Just be aware that they are not stored in the local folder but rather in the .cache directory under "/root/.cache/huggingface": hf_hub_download will create a symbolic link in your local folder pointing to the .cache. This is the case if you do not set local_dir_use_symlinks. If you set it to False, it copies to both .cache and the local dir (as I mentioned, 2x the space).
You can identify the link destination and remove both the link and the destination using Python's os module, such as:
import os

# Resolve where the symlink points (the actual file inside the .cache directory).
target_path = os.readlink("data_volumes/dataset/valid/valid_1/valid_1_a/valid_1_a_1.nii.gz")
# Remove the symlink itself, then the cached file it pointed to.
os.remove("data_volumes/dataset/valid/valid_1/valid_1_a/valid_1_a_1.nii.gz")
os.remove(target_path)
For situations where space is limited, you can use something like this:
from huggingface_hub import hf_hub_download
import pandas as pd
import os

repo_id = "ibrahimhamamci/CT-RATE"
directory_name = "dataset/valid/"

data = pd.read_csv("valid_labels.csv")

for name in data["VolumeName"]:
    # Rebuild the folder hierarchy from the volume name (e.g. valid_1_a_1.nii.gz).
    folder1 = name.split("_")[0]
    folder2 = name.split("_")[1]
    folder = folder1 + "_" + folder2
    folder3 = name.split("_")[2]
    subfolder = folder + "_" + folder3
    subfolder = directory_name + folder + "/" + subfolder

    hf_hub_download(repo_id=repo_id, repo_type="dataset", token="your_token_here",
                    subfolder=subfolder, filename=name, local_dir="data_volumes")

    ## do your stuff here ##

    # Clean up: remove the symlink and the cached file it points to.
    target_path = os.readlink("data_volumes/" + subfolder + "/" + name)
    os.remove("data_volumes/" + subfolder + "/" + name)
    os.remove(target_path)
This should work with Colab. Let me know if you have any questions or need further assistance!
To load the images after you download them, you can use the nibabel library:
import nibabel as nib
img = nib.load('path_to_your_downloaded_image.nii.gz')
data = img.get_fdata()
This will read the nii.gz data as a numpy array. Please make sure you correctly preprocess the input for spacings and HU values; the necessary information is given in the metadata csv files. You can find the preprocessing scripts for our case (CT-CLIP) on GitHub: here. They adjust the x-y-z spacings to 0.75, 0.75, 1.5 (you can change this as you wish) and also convert the pixel values to HU.
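As a rough illustration of that preprocessing (not the actual CT-CLIP script; the slope, intercept, and current spacing below are placeholder examples that should come from the metadata csv):
import nibabel as nib
import numpy as np
from scipy import ndimage

img = nib.load('path_to_your_downloaded_image.nii.gz')
data = img.get_fdata()

# Placeholder values; take the real ones from the metadata csv row for this volume.
slope, intercept = 1.0, -1024.0
current_spacing = np.array([0.75, 0.75, 1.0])  # x, y, z in mm (example)
target_spacing = np.array([0.75, 0.75, 1.5])

# Convert the stored values to Hounsfield units.
hu = data * slope + intercept

# Resample so the voxel spacing matches the target spacing.
zoom_factors = current_spacing / target_spacing
resampled = ndimage.zoom(hu, zoom_factors, order=1)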
Hello @farrell236 , could you check the new spacings? I will close the issue if you are OK with them. Thank you very much!
Hi @sezginer ! Thanks for the updated metadata 🙂 I've fixed the NIfTI spacings using the new data and they seem to be correct (at least for the ones noted before). I'd like to do a thorough examination of all volumes but I'm not sure how to go about it yet. Manual verification of them all is quite challenging due to the number of scans to inspect, and I haven't yet found an automatic solution.
I think as the bug is resolved, perhaps close the issue for now and reopen if something else related arises?
I haven’t closed it just yet because I've also found some minor nits, e.g. the orientation of valid_96_b_2 is flipped in z, and valid_208_a_1, valid_208_a_2, valid_208_a_4, valid_257_a_4, valid_340_a_1, valid_340_a_2, and valid_340_a_4 are head CTs. Is the dataset intended to be exclusively chest CTs? If it includes the occasional head CT, then for deep learning purposes this could act as a regularization method, similar to dropout, where an incorrect data point can help jostle the model out of local minima.
Hi @farrell236 , thank you very much!
Yes, checking them manually is not feasible as there are over 50,000 images. It is even harder for our case as we do not have a user interface in our cluster 🙂.
It is intended to be exclusively chest CTs, but there might occasionally be head CT images when the imaging is full body. We have cleaned it up as much as we can with the information in the metadata, but there might be some remaining. However, it is not a big problem, as you mentioned 🙂. I can still remove the entries with more than 4 reconstructions, though.
We have also corrected the flipped Z orientations as much as we could from the metadata, but there might be some remaining as well. Let me know if you have any idea how to check for that automatically. I think it is not feasible to check every data point manually as it would take too much time, and since these cases are extremely rare now that we have corrected most of them, I don't think they will change anything in terms of the models.
We can still keep this open as we will have another discussion with somebody about the orientation.