Update index.html
parent aba80337b3
commit 04325486c1
@@ -36,7 +36,7 @@
 <!-- <span class="venue">Conference name</span> -->
 <td><center><img src="files/ltl_logo.png" width="1000" ></center></td>
 <br><br>
-<span class="title">Visual Med-Alpaca: Bridging Modalities in Biomedical Language Models</span>
+<span class="title">Visual Med-Alpaca: A Parameter-Efficient Biomedical LLM with Visual Capabilities</span>
 
 <table align="center" border="0" width="1000" class="authors">
 <tbody><tr>
@@ -71,7 +71,7 @@
 <tr><td>
 <table border="0">
 </tbody>
-<tr><td class="caption"><a href="https://github.com/cambridgeltl/visual-med-alpaca"><b>Visual Med-Alpaca</b></a> is an open-source, multi-modal foundation model specifically for the biomedical domain, built on the <a href="https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/">LLaMa-7B</a>. With a few hours of instruct-tuning and plug-and-play visual modules, it was designed to perform a range of tasks, from reading radiological images and answering complex clinical questions, while being easily deployable and replicable with a single gaming GPU. </td></tr>
+<tr><td class="caption"> Introducing <a href="https://github.com/cambridgeltl/visual-med-alpaca"><strong>Visual Med-Alpaca</strong></a>, an open-source, parameter-efficient biomedical foundation model that can be integrated with medical "visual experts" for multimodal biomedical tasks. Built upon the <a href="https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/">LLaMa-7B</a> architecture (<a href="https://arxiv.org/abs/2302.13971">Touvron et al., 2023</a>), this model is trained using an instruction set curated collaboratively by GPT-3.5-Turbo and human experts. Leveraging a few hours of instruction-tuning and the inclusion of plug-and-play visual modules, Visual Med-Alpaca can perform a diverse range of tasks, from interpreting radiological images to addressing complex clinical inquiries. The model can be replicated with ease, necessitating only a single consumer GPU. </td></tr>
 </tbody></table>
 <br>
 