There are two ways to run LocalScore. The easiest way to get started is to download one of the Official Models. If you already have .gguf models, you can run LocalScore with them.
To run an Official Model, select the benchmark you want to run, then open your terminal and run:

curl -OL https://localscore.ai/download/localscore-tiny
chmod +x localscore-tiny
./localscore-tiny

To benchmark a .gguf model you already have, pass it to the LocalScore executable with -m (Windows example):

localscore-0.9.2.exe -m Llama-3.2-1B-Instruct-Q4_K_M.gguf
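On Linux or macOS the bring-your-own-model path looks much the same; as a minimal sketch, assuming the plain LocalScore binary has already been downloaded and the model filename is only an example:

# make the downloaded binary executable
chmod +x localscore
# benchmark an existing .gguf model with the same -m flag as the Windows command above
./localscore -m Llama-3.2-1B-Instruct-Q4_K_M.gguf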
7 Comments
jborichevskiy
Congrats on launching!
Stoked to have this dataset out in the open. I submitted a bunch of tests for some models I'm experimenting with on my M4 Pro. Rather paltry scores compared to having a dedicated GPU but I'm excited that running a 24B model locally is actually feasible at this point.
mentalgear
Congrats on the effort – the local-first / private space needs more performant AI, and AI in general needs more comparable and trustworthy benchmarks.
Notes:
– Ollama integration would be nice
– Is there anonymous federated score sharing? That way, users could approximate a model's performance before downloading it.
alchemist1e9
Really awesome project!
Clicking on a GPU gives a nice, simple visualization. I was thinking you could make that kind of visual representation immediately accessible on the landing page.
cpubenchmark.net could be an example of a technique for drawing the site visitor into the paradigm.
roxolotl
This is super cool. I finally just upgraded my desktop, and one thing I’m curious to do with it is run local models. Of course the RAM is late, so I’ve been googling to get an idea of what I could expect, and there’s not much out there to compare to unless you’re running state-of-the-art stuff.
I’ll make sure to run this and contribute my benchmark once my RAM comes in.
jsatok
Contributed scores for the M3 Ultra 512 GB unified memory: https://www.localscore.ai/accelerator/404
Happy to test larger models that utilize the memory capacity if helpful.
ftbsqcfjm
Interesting approach to making local recommendations more personalized and relevant. I'm curious about the cold start problem for new users and how the platform handles privacy. Partnering with local businesses to augment data could be a smart move. Will be watching to see how this develops!
omneity
This is great, congrats for launching!
A couple of ideas: I would like to benchmark a remote headless server, as well as different methods of running the LLM (vllm vs tgi vs llama.cpp …) on my local machine, and in this case llamafile is quite limiting. Connecting over an OpenAI-like API instead would be great!
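For instance (just a sketch, not something LocalScore supports today; the endpoint, port, and model name are placeholders for whatever OpenAI-compatible server is running):

# time a single completion against an OpenAI-compatible endpoint
time curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Say hello"}], "max_tokens": 128}'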