Chang Shu1*, Baian Chen2*, Fangyu Liu1, Zihao Fu1, Ehsan Shareghi1, Nigel Collier1
1University of Cambridge  2Ruiping Health
Demo

(Insert demo GIF here.)
Introduction
We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. In our preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI’s text-davinci-003, while being surprisingly small and cheap to reproduce. Assets released:
Overview: Model and Training Recipe
(Figure: overview of the model architecture and training procedure.)
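At a high level, the recipe is the one described in the introduction: format each of the 52K demonstrations as a prompt-plus-response string and fine-tune LLaMA 7B on the result with a standard causal language-modeling loss. Below is a minimal sketch of the formatting step; the prompt template and file name are illustrative assumptions, not the released artifacts.

```python
import json

# Hypothetical Alpaca-style prompt template; the exact wording used for
# this project is an assumption, not confirmed by the release.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(record):
    """Turn one {"instruction", "output"} record into a single training string."""
    return PROMPT.format(instruction=record["instruction"]) + record["output"]

# "instructions_52k.json" is a placeholder name for the 52K-demonstration file.
with open("instructions_52k.json") as f:
    train_texts = [format_example(r) for r in json.load(f)]
```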
Domain Adaptation: Self-Instruct in the Biomedical Domain
How the instruction-tuning set is generated (a sketch follows below).
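As a rough illustration of a self-instruct-style bootstrapping step in the biomedical domain, the sketch below samples a few seed tasks, asks an LLM to propose new instructions, and returns them for filtering. The seed file name, prompt wording, and model name are assumptions for illustration, not the project's exact pipeline.

```python
import json
import random

from openai import OpenAI  # requires openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical prompt; the project's actual self-instruct prompt may differ.
PROMPT_TEMPLATE = (
    "You are generating tasks for a biomedical assistant.\n"
    "Here are some example instructions:\n{examples}\n\n"
    "Write {n} new, diverse biomedical instructions, one per line."
)

def generate_instructions(seed_path="seed_biomedical_tasks.json", n_new=5):
    """One bootstrapping step: prompt an LLM with sampled seed tasks."""
    with open(seed_path) as f:
        seeds = json.load(f)  # list of {"instruction": ...} records
    sample = random.sample(seeds, min(3, len(seeds)))
    examples = "\n".join(f"- {t['instruction']}" for t in sample)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the original work used text-davinci-003
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(examples=examples, n=n_new)}],
        temperature=1.0,
    )
    lines = resp.choices[0].message.content.splitlines()
    # A real pipeline would also filter near-duplicates (e.g. by ROUGE-L
    # overlap with the existing pool) before adding new instructions back.
    return [l.strip().lstrip("- ").strip() for l in lines if l.strip()]
```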
Visual Adaptation: DePlot and Medical VQA
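One plausible way to wire DePlot into this pipeline is shown below: translate a plot into a linearized data table with the public google/deplot checkpoint (following its documented transformers usage), then hand the table to the instruction-tuned model as plain text. The image file name is a placeholder, and this integration is a sketch rather than the project's confirmed design.

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Load the public DePlot checkpoint (plot-to-table translation).
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

image = Image.open("patient_chart.png")  # hypothetical figure from a medical document
inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
predictions = model.generate(**inputs, max_new_tokens=512)
table = processor.decode(predictions[0], skip_special_tokens=True)

# The linearized table can then be spliced into the language model's prompt,
# reducing a chart question to a text-only question.
print(table)
```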
Implementation Details
Hyperparameters
Training time
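Pending the final numbers, a minimal fine-tuning setup with Hugging Face transformers might look like the sketch below. The checkpoint path and every hyperparameter value are placeholders, not the project's reported settings.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_PATH = "path/to/llama-7b"  # placeholder for local LLaMA weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# Toy training text; in practice this is the formatted 52K-demonstration set.
train_texts = [
    "### Instruction:\nDefine hypertension.\n\n### Response:\n"
    "Hypertension is persistently elevated arterial blood pressure."
]

enc = tokenizer(train_texts, truncation=True, max_length=512, padding="max_length")
# Causal-LM labels; a careful run would mask prompt/padding tokens with -100.
enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
train_ds = Dataset.from_dict(dict(enc))

args = TrainingArguments(
    output_dir="ckpt",
    num_train_epochs=3,              # all values here are illustrative,
    per_device_train_batch_size=4,   # not the reported recipe
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    logging_steps=10,
    bf16=True,                       # assumes a bf16-capable GPU
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```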
Evaluation and Known Limitations
We evaluate
Limited human evaluation (links here)
Comparison with Other Methods
Comparison with ChatGPT, Alpaca, and Galactica.
Future Work
Acknowledgement
Thanks to