From 2d49088f49da9da7f175165598a1f4beed75961f Mon Sep 17 00:00:00 2001
From: "Baian Chen (Andrew)"
Date: Tue, 18 Apr 2023 00:11:33 +0800
Subject: [PATCH] Update index.html

---
 docs/index.html | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/docs/index.html b/docs/index.html
index 78b390e..561c4d0 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -72,9 +72,7 @@
-
-
-
+
 Introducing Visual Med-Alpaca, an open-source, parameter-efficient biomedical foundation model that can be integrated with medical "visual experts" for multimodal biomedical tasks. Built upon the LLaMA-7B architecture (Touvron et al., 2023), the model is trained on an instruction set curated collaboratively by GPT-3.5-Turbo and human experts. With only a few hours of instruction tuning and plug-and-play visual modules, Visual Med-Alpaca can perform a diverse range of tasks, from interpreting radiological images to answering complex clinical questions. The model is easy to replicate, requiring only a single consumer GPU.
 Refer to our Official Github Repo for code and data.

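The "plug-and-play" design described in the patched paragraph amounts to a two-stage pipeline: a visual expert first converts the medical image into intermediate text, which is then spliced into the instruction prompt for the fine-tuned language model. Below is a minimal sketch of that idea using Hugging Face `transformers`; the checkpoint names and prompt template are placeholders for illustration, not the project's released weights or API, so swap in the actual Visual Med-Alpaca components from the repo.

```python
# Hedged sketch of the two-stage "visual expert -> LLM" pipeline described
# above. All model names below are stand-ins, NOT Visual Med-Alpaca's actual
# checkpoints (the project fine-tunes LLaMA-7B with medical visual experts).
from transformers import pipeline

# Stage 1: a plug-and-play visual expert that maps an image to text.
# "microsoft/git-base" is a generic captioner used here only as a placeholder.
visual_expert = pipeline("image-to-text", model="microsoft/git-base")

# Stage 2: an instruction-tuned language model. "gpt2" is a tiny stand-in so
# the sketch runs anywhere; the real system uses an instruction-tuned LLaMA-7B.
llm = pipeline("text-generation", model="gpt2")

def answer(image_path: str, question: str) -> str:
    # The visual expert turns the image into intermediate text...
    caption = visual_expert(image_path)[0]["generated_text"]
    # ...which is inserted into the instruction prompt as extra context.
    prompt = (
        f"Image context: {caption}\n"
        f"Instruction: {question}\n"
        "Response:"
    )
    # The language model completes the prompt to produce the final answer.
    out = llm(prompt, max_new_tokens=128, return_full_text=False)
    return out[0]["generated_text"]

if __name__ == "__main__":
    print(answer("chest_xray.png", "Describe any abnormal findings."))
```

Because the visual module only communicates with the language model through text, any captioner can be swapped in without retraining the LLM, which is what makes the modules "plug-and-play".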