mirror of
https://github.com/RYDE-WORK/visual-med-alpaca.git
synced 2026-01-19 14:28:49 +08:00
Update index.html
This commit is contained in:
parent 58099c1c73
commit 5ec6d5b036
@@ -73,6 +73,7 @@
 </tbody>
 <tr><td class="caption"> Introducing <a href="https://github.com/cambridgeltl/visual-med-alpaca"><strong>Visual Med-Alpaca</strong></a>, an open-source, parameter-efficient biomedical foundation model that can be integrated with medical "visual experts" for multimodal biomedical tasks. Built upon the <a href="https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/">LLaMa-7B</a> architecture (<a href="https://arxiv.org/abs/2302.13971">Touvron et al., 2023</a>), this model is trained using an instruction set curated collaboratively by GPT-3.5-Turbo and human experts. Leveraging a few hours of instruction-tuning and the inclusion of plug-and-play visual modules, Visual Med-Alpaca can perform a diverse range of tasks, from interpreting radiological images to addressing complex clinical inquiries. The model can be replicated with ease, necessitating only a single consumer GPU. </td></tr>
 <br></br>
+<br></br>
 <tr><td class="caption"> Refer to our <a href="https://github.com/cambridgeltl/visual-med-alpaca"><strong>Official Github Repo</strong></a> for code and data.</td></tr>
 </tbody></table>
 <br>