diff --git a/docs/index.html b/docs/index.html
index 93cb202..6a572e9 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -36,7 +36,7 @@


-Visual Med-Alpaca: Bridging Modalities in Biomedical Language Models
+Visual Med-Alpaca: A Parameter-Efficient Biomedical LLM with Visual Capabilities
@@ -71,7 +71,7 @@
-Visual Med-Alpaca is an open-source, multi-modal foundation model specifically for the biomedical domain, built on the LLaMa-7B. With a few hours of instruct-tuning and plug-and-play visual modules, it was designed to perform a range of tasks, from reading radiological images and answering complex clinical questions, while being easily deployable and replicable with a single gaming GPU.
+Introducing Visual Med-Alpaca, an open-source, parameter-efficient biomedical foundation model that can be integrated with medical "visual experts" for multimodal biomedical tasks. Built upon the LLaMa-7B architecture (Touvron et al., 2023), the model is trained on an instruction set curated collaboratively by GPT-3.5-Turbo and human experts. With a few hours of instruction-tuning and plug-and-play visual modules, Visual Med-Alpaca performs a diverse range of tasks, from interpreting radiological images to answering complex clinical questions, and is easy to replicate, requiring only a single consumer GPU.
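
As context for the "plug-and-play visual modules" in the added copy above, here is a minimal sketch of how such a pipeline is commonly wired, assuming a captioning-style visual expert whose text output is folded into the prompt of the text-only language model. The names here (VisualExpert, llm_generate, answer) are illustrative assumptions, not the project's actual API.

# Hypothetical sketch, not the project's real interface: a visual expert
# converts a medical image into intermediate text, which is combined with
# the user's question and passed to the instruction-tuned text-only LLM.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VisualExpert:
    # Converts raw image bytes into a textual description, e.g. a
    # radiology caption or a table de-rendered from a plot.
    name: str
    caption: Callable[[bytes], str]

def answer(image: bytes, question: str, expert: VisualExpert,
           llm_generate: Callable[[str], str]) -> str:
    visual_context = expert.caption(image)  # image -> intermediate text
    prompt = (
        "Below is an instruction, paired with visual context extracted "
        "from a medical image.\n"
        f"Visual context: {visual_context}\n"
        f"Instruction: {question}\n"
        "Response:"
    )
    return llm_generate(prompt)  # the text-only LLM produces the answer

Because the language model only ever sees text, a new visual expert can be swapped in without retraining the backbone, which is what makes the modules plug-and-play.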