Visual Med-Alpaca: Bridging Modalities in Biomedical Language Models
Chang Shu1* Baian Chen2* Fangyu Liu1 Zihao Fu1 Ehsan Shareghi3 Nigel Collier1
1University of Cambridge      2Ruiping Health      3Monash University

Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMA-7B. With a few hours of instruct-tuning and plug-and-play visual modules, it can perform a diverse range of tasks, from reading radiological images to answering complex clinical questions, while remaining easy to deploy and replicate on a single gaming GPU.

Demo (insert GIF here) (Baian)

Please register for a Hugging Face account and fill out this form [link] to access the online demo of Visual Med-Alpaca. Warning: the demo is for academic use only; do not apply it to real clinical scenarios!
Overview

Domain-specific foundation models are extremely useful in the biomedical domain, as biomedical text is highly specialized and contains many domain-specific terms and concepts that are absent from general-domain corpora such as Wikipedia and Books. Pre-training on large volumes of biomedical text has been shown to improve the performance of language models on a range of biomedical text mining tasks compared with models trained only on general-domain corpora. However, to the best of our knowledge, no open-source multi-modal foundation model exists for the biomedical domain. To fill this gap, we develop Visual Med-Alpaca.


Resources:

    We apologize for the inconvenience: this project is currently undergoing internal ethical screening at the University of Cambridge. We anticipate releasing the following assets within the next 1-2 weeks. In the meantime, you are welcome to join our waitlist, and we will notify you as soon as they become available.

  • Data: Github
  • Data Generation: Github
  • Visual Adaptation: Github
  • Training Code: Github
  • Demo: Huggingface Space
Model Architecture and Training Recipe

Overview of the model architecture and training procedure.
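The abstract describes plug-and-play visual modules feeding an instruct-tuned LLaMA-7B. Below is a minimal sketch of such a pipeline, assuming the visual module's output is converted to text and merged with the user's question into an Alpaca-style prompt; the function names and prompt template are illustrative assumptions, not the released implementation.

    # Minimal sketch of the prompt-merging pipeline; all names here are illustrative assumptions.
    from typing import Callable, Optional

    # Alpaca-style template (assumed): the visual module's text output fills the context slot.
    PROMPT_TEMPLATE = (
        "Below is an instruction that describes a task, paired with an input that provides "
        "further context. Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{question}\n\n"
        "### Input:\n{visual_context}\n\n"
        "### Response:\n"
    )

    def answer(question: str,
               llm_generate: Callable[[str], str],
               image_path: Optional[str] = None,
               visual_module: Optional[Callable[[str], str]] = None) -> str:
        """Convert an optional image to text, merge it with the question, and query the LLM."""
        # 1. Plug-and-play visual module: image -> text (caption, linearized data table, ...).
        visual_context = visual_module(image_path) if image_path and visual_module else "None"
        # 2. Merge the visual text and the user's question into one instruct-style prompt.
        prompt = PROMPT_TEMPLATE.format(question=question, visual_context=visual_context)
        # 3. The instruct-tuned LLaMA-7B model generates the final answer.
        return llm_generate(prompt)

Because the visual modules communicate with the language model purely through text, new modules can be added or swapped without retraining the language model itself.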
Domain Adaptation: Self-Instruct in Biomedical Domain (Baian)

How to generate the instruct-tuning set
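As a hedged illustration of how a biomedical instruct-tuning set can be assembled in the self-instruct style, the sketch below prompts an external LLM to answer seed biomedical questions and stores the pairs in the Alpaca instruction/input/output format. The API model, prompt wording, and file names are assumptions for illustration, not the exact recipe used for Visual Med-Alpaca.

    # Hedged sketch: generate instruction-response pairs from seed biomedical questions.
    # The API model, prompt, and file names are assumptions, not the project's released recipe.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_pairs(seed_questions, model="gpt-3.5-turbo"):
        pairs = []
        for question in seed_questions:
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system",
                     "content": "You are a medical expert. Answer concisely and factually."},
                    {"role": "user", "content": question},
                ],
                temperature=0.2,
            )
            pairs.append({
                "instruction": question,
                "input": "",
                "output": response.choices[0].message.content,
            })
        return pairs

    if __name__ == "__main__":
        with open("seed_biomedical_questions.txt") as f:  # assumed file of seed questions
            seeds = [line.strip() for line in f if line.strip()]
        with open("med_instruct_data.json", "w") as f:
            json.dump(generate_pairs(seeds), f, indent=2)

The generated pairs can then be filtered for quality before being used for instruct-tuning.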
Visual Adaptation: Deplot and Medical VQA (Baian)
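As a concrete example of a plug-and-play visual module, the sketch below calls DePlot through Hugging Face Transformers to translate a plot or chart into a linearized data table, which can then be merged into the text prompt as in the pipeline sketch above. The loading, preprocessing, and generation calls follow the public google/deplot model card; the image path, and the idea of swapping in a medical image captioner for radiology inputs (e.g., a GIT-style model fine-tuned on data such as ROCO), are assumptions here.

    # Hedged sketch: DePlot as a plug-and-play visual module (per the google/deplot model card).
    from PIL import Image
    from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

    processor = Pix2StructProcessor.from_pretrained("google/deplot")
    model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

    image = Image.open("example_chart.png")  # illustrative path
    inputs = processor(
        images=image,
        text="Generate underlying data table of the figure below:",
        return_tensors="pt",
    )
    table_ids = model.generate(**inputs, max_new_tokens=512)
    linearized_table = processor.decode(table_ids[0], skip_special_tokens=True)

    # The linearized table (or, for radiology images, a caption produced by a medical
    # VQA / image-captioning module) is then inserted into the language-model prompt.
    print(linearized_table)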

Implementation Details

Hyper-parameters and training time
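As a hedged illustration of an Alpaca-LoRA-style instruct-tuning setup on a single gaming GPU, the sketch below wraps LLaMA-7B with a LoRA adapter using the Hugging Face peft library. The checkpoint path and every hyper-parameter value shown are illustrative assumptions, not the settings actually used for Visual Med-Alpaca.

    # Hedged sketch of Alpaca-LoRA-style instruct-tuning; all values are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_model = "llama-7b-hf"  # local path or hub id of the LLaMA-7B weights (assumed)
    tokenizer = AutoTokenizer.from_pretrained(base_model)  # used later by the training loop
    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        torch_dtype=torch.float16,  # half precision (optionally 8-bit) to fit a single gaming GPU
        device_map="auto",
    )

    lora_config = LoraConfig(
        r=8,                        # LoRA rank (assumed)
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the small adapter matrices are updated

    # Training can then follow the standard Alpaca-LoRA loop
    # (e.g., transformers.Trainer over the instruct-tuning set for a few epochs).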
Comparison with Other Methods

Compare with ChatGPT / Alpaca / Galactica
Future Work

Limitations

Visual Med-Alpaca is intended for academic research purposes only. Any commercial or clinical use of the model is strictly prohibited. This decision reflects the non-commercial license inherited from LLaMA, on which the model is built. Additionally, Visual Med-Alpaca is not legally approved for medical use in any country. Users should be aware of the model's limitations in terms of medical knowledge and the possibility of misinformation. Any reliance on Visual Med-Alpaca for medical decision-making is therefore at the user's own risk.

Note: The developers and owners of the model, the Language Technology Lab at the University of Cambridge, do not assume any liability for the accuracy or completeness of the information provided by Visual Med-Alpaca, nor will they be responsible for any potential harm caused by misuse of the model.
Acknowledgement

We are deeply grateful for the contributions of the following open-source projects: LLaMA, Stanford Alpaca, Alpaca-LoRA, Deplot, BigBio, ROCO, Visual-ChatGPT, and GenerativeImage2Text.