This post presents an R port of llama2.c, Andrej Karpathy's C implementation of Llama 2 inference. It covers installation, code examples, and the steps needed to run a Shiny app for experimenting with the model.

Code available at: https://github.com/thierrymoudiki/llama2r/tree/main

R port of llama2.c (https://github.com/karpathy/llama2.c)

Code and Shiny app for educational purposes. Experiment with the temperature parameter.


Install

devtools::install_github("thierrymoudiki/llama2r")

A code example is available in vignettes/getting-started.Rmd.

Shiny app

The app is located in /vignettes/app.R.

Reproducible steps

Step 1: Prepare the C Code

Clone the Repository:

git clone https://github.com/karpathy/llama2.c.git
cd llama2.c

Step 2: Compile the C Code

gcc -Ofast run.c -lm -o run

Step 3: Download a Pretrained Model

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin
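
If you prefer to stay in R, the same file can be fetched with base R's download.file(). This is just a sketch equivalent to the wget call above; mode = "wb" keeps the binary file intact:

# Equivalent of the wget call above, done from R
download.file(
  "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin",
  destfile = "stories42M.bin",
  mode = "wb"  # required for binary files
)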

Step 4: Download the Tokenizer

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/tokenizer.bin
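
With the compiled run binary, the model, and the tokenizer sitting in the llama2.c directory, you can already try generation from an R session by calling the binary through system2(). This is only a sketch: the flag names (-t for temperature, -i for the prompt, -z for the tokenizer path) follow the llama2.c command-line interface at the time of writing and may change.

# Sketch: call the compiled llama2.c binary from R
# (assumes the working directory is the llama2.c checkout)
out <- system2(
  "./run",
  args = c("stories42M.bin",
           "-z", "tokenizer.bin",                # tokenizer downloaded above
           "-t", "0.8",                          # temperature
           "-i", shQuote("Once upon a time")),   # prompt
  stdout = TRUE
)
cat(out, sep = "\n")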

Step 5: Packaging

The .bin files (model and tokenizer) are stored in inst/bin.
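
Since the .bin files ship in inst/bin, they land in the bin/ folder of the installed package and can be located at runtime with system.file(). A minimal sketch:

# Locate the packaged model and tokenizer after installation
model_path     <- system.file("bin", "stories42M.bin", package = "llama2r")
tokenizer_path <- system.file("bin", "tokenizer.bin",  package = "llama2r")
file.exists(c(model_path, tokenizer_path))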

Step 6: Run the Shiny App

library(shiny)
library(llama2r)
runApp(system.file("vignettes", "app.R", package = "llama2r"))
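
Once launched, the app gives you a simple interface for experimenting with the temperature parameter mentioned above.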
