
RoentGen: Vision-Language Foundation Model for Chest X-ray Generation

Multimodal models trained on large natural image-text pair datasets have exhibited astounding abilities in generating high-quality images. Medical imaging data is fundamentally different from natural images, and the language used to succinctly capture relevant details in medical data relies on a different, narrow but semantically rich, domain-specific vocabulary.

Data and Resources

This dataset has no data

Cite this as

Pierre Chambon, Christian Bluethgen, Jean-Benoit Delbrouck, Rogier Van der Sluijs, Małgorzata Połacin, Juan Manuel Zambrano Chaves, Tanishq Mathew Abraham, Shivanshu Purohit, Curtis P. Langlotz, Akshay Chaudhari (2024). Dataset: RoentGen: Vision-Language Foundation Model for Chest X-ray Generation. https://doi.org/10.57702/4gsh3dkb

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field          Value
Created        December 2, 2024
Last update    December 2, 2024
Author         Pierre Chambon
More Authors   Christian Bluethgen, Jean-Benoit Delbrouck, Rogier Van der Sluijs, Małgorzata Połacin, Juan Manuel Zambrano Chaves, Tanishq Mathew Abraham, Shivanshu Purohit, Curtis P. Langlotz, Akshay Chaudhari
Homepage       https://arxiv.org/abs/2210.04133