diff --git a/docs/files/style.css b/docs/files/style.css
index c4bbdb9..473c4f6 100644
--- a/docs/files/style.css
+++ b/docs/files/style.css
@@ -49,7 +49,7 @@ body {
 }
 
 .title {
-  font-size: 150%;
+  font-size: 180%;
   font-weight: bold;
   display: block;
   text-align: center;
diff --git a/docs/files/teaser.jpg b/docs/files/teaser.jpg
deleted file mode 100644
index fceabc6..0000000
Binary files a/docs/files/teaser.jpg and /dev/null differ
diff --git a/docs/index.html b/docs/index.html
index 2ed079b..eb0e449 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1,7 +1,7 @@
-Chang Shu1* |
+Chang Shu1* |
 Baian Chen2* |
-Fangyu Liu1 |
-Zihao Fu1 |
-Ehsan Shareghi 1 |
+Fangyu Liu1 |
+Zihao Fu1 |
+Ehsan Shareghi 3 |
 Nigel Collier1 |
@@ -88,8 +91,20 @@ Please register for Hugging Face and fill out this form [link] to access the onl
-Introduction
-
-We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. On our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce.
+Overview
+
+Domain-specific foundation models are extremely useful in the biomedical domain, as biomedical text is highly specialized and contains many domain-specific terms and concepts that are not present in general-domain text corpora such as Wikipedia and Books. Pre-training on large volumes of biomedical text has been shown to improve the performance of language models on several biomedical text-mining tasks compared to existing publicly available biomedical PLMs.
+
+However, to the best of our knowledge, there is no existing multimodal foundation model for the biomedical domain.
+
+Therefore, we develop Visual Med-Alpaca, a multimodal foundation model tailored to the biomedical domain.
+
+
-Assets released:
+Resources:
- Overview: Model and Training Recipe
+ Model Architecture and Training Recipe
Overview of the model architecture and training procedure.
@@ -308,13 +340,6 @@ Hyper-parameter
Training time
-
- Evaluation and Known Limitations
-
-We evaluate
-Limited Human evaluation (Links Here)
-
-
Comparison with Other Methods
@@ -329,10 +354,28 @@ Compare with ChatGPT / Alpaca / Galactica
+
+ Limitations
+
+Visual Med-Alpaca is intended for academic research purposes only. Any commercial or clinical use of the model is strictly prohibited. This decision is based on the non-commercial license inherited from LLaMA, on which the model is built. Additionally, Visual Med-Alpaca is not legally approved for medical use in any country. Users should be aware of the model's limitations in terms of medical knowledge and the possibility of misinformation. Therefore, any reliance on Visual Med-Alpaca for medical decision-making is at the user's own risk.
+
+Note: The developers and owners of the model, the Language Technology Lab at Cambridge University, do not assume any liability for the accuracy or completeness of the information provided by Visual Med-Alpaca, nor will they be responsible for any potential harm caused by the misuse of the model.
+
+
+
+
Acknowledgement
-Thanks to
+We are deeply grateful for the contributions made by the following open-source projects:
+LLaMA,
+Stanford Alpaca,
+Alpaca-LoRA,
+Deplot,
+BigBio,
+ROCO,
+Visual-ChatGPT,
+GenerativeImage2Text.