diff --git a/README.md b/README.md
index 5699847..52c0024 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
-# Visual Med-Alpaca: Bridging Modalities in Biomedical Language Models
+# Visual Med-Alpaca: Bridging Modalities in Biomedical Language Models [[BLOG](https://cambridgeltl.github.io/visual-med-alpaca/)]
[Chang Shu](https://ciaranshu.github.io)1\*, Baian Chen2\*, [Fangyu Liu](http://fangyuliu.me)1, [Zihao Fu](https://fuzihaofzh.github.io)1, [Ehsan Shareghi](https://eehsan.github.io)3, [Nigel Collier](https://sites.google.com/site/nhcollier/home/)1
@@ -11,7 +11,7 @@
## Abstract
-[**Visual Med-Alpaca**](https://github.com/cambridgeltl/visual-med-alpaca) is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on the [LLaMa-7B](https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/). With a few hours of instruct-tuning and plug-and-play visual modules, it can perform a range of tasks from reading radiological images and answering complex clinical questions, while being easily deployable and replicable with a single gaming GPU. [[BLOG](https://cambridgeltl.github.io/visual-med-alpaca/)]
+[**Visual Med-Alpaca**](https://github.com/cambridgeltl/visual-med-alpaca) is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on [LLaMa-7B](https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/). With a few hours of instruct-tuning and plug-and-play visual modules, it can perform a range of tasks, from reading radiological images to answering complex clinical questions, while remaining easy to deploy and replicate on a single consumer GPU.