
Show HN: LocalScore – Local LLM Benchmark by sipjca

7 Comments

  • jborichevskiy
    Posted April 3, 2025 at 4:40 pm

    Congrats on launching!

    Stoked to have this dataset out in the open. I submitted a bunch of tests for some models I'm experimenting with on my M4 Pro. Rather paltry scores compared to having a dedicated GPU but I'm excited that running a 24B model locally is actually feasible at this point.

  • mentalgear
    Posted April 6, 2025 at 12:51 pm

    Congrats on the effort – the local-first / private space needs more performant AI, and AI in general needs more comparable and trustworthy benchmarks.

    Notes:
    – Ollama integration would be nice
    – Is there anonymous federated score sharing?
    That way, users could approximate a model's performance before downloading it.

  • alchemist1e9
    Posted April 6, 2025 at 1:55 pm

    Really awesome project!

    Clicking on a GPU gives a nice, simple visualization. I was thinking you could try making that kind of visual representation intuitively accessible right on the landing page.

    cpubenchmark.net could be an example technique for drawing the site visitor into the paradigm.

  • roxolotl
    Posted April 6, 2025 at 3:42 pm

    This is super cool. I finally just upgraded my desktop, and one thing I'm curious to do with it is run local models. Of course the RAM is late, so I've been googling to get an idea of what I could expect, and there's not much out there to compare against unless you're running state-of-the-art stuff.

    I'll make sure to contribute my benchmark to this once my RAM comes in.

  • jsatok
    Posted April 6, 2025 at 6:51 pm

    Contributed scores for the M3 Ultra 512 GB unified memory: https://www.localscore.ai/accelerator/404

    Happy to test larger models that utilize the memory capacity if helpful.

  • ftbsqcfjm
    Posted April 6, 2025 at 7:19 pm

    Interesting approach to making local recommendations more personalized and relevant. I'm curious about the cold start problem for new users and how the platform handles privacy. Partnering with local businesses to augment data could be a smart move. Will be watching to see how this develops!

  • omneity
    Posted April 6, 2025 at 10:33 pm

    This is great, congrats for launching!

    A couple of ideas: I would like to benchmark a remote headless server, as well as different methods of running the LLM (vLLM vs TGI vs llama.cpp …) on my local machine, and in this case llamafile is quite limiting. Connecting over an OpenAI-like API instead would be great!
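    A minimal sketch of the kind of backend-agnostic benchmarking this suggests, assuming only an OpenAI-compatible `/v1/completions` endpoint (as served by vLLM, llama.cpp's server, and similar). The `base_url`, `model`, and response field names here are generic assumptions about that API shape, not LocalScore's actual implementation:

    ```python
    import json
    import time
    import urllib.request

    def tokens_per_second(n_tokens, start, end):
        """Throughput from a token count and wall-clock bounds."""
        elapsed = end - start
        return n_tokens / elapsed if elapsed > 0 else 0.0

    def benchmark_endpoint(base_url, model, prompt, max_tokens=128):
        """Time a single completion request against any
        OpenAI-compatible server (local or remote headless)."""
        body = json.dumps({
            "model": model,
            "prompt": prompt,
            "max_tokens": max_tokens,
        }).encode()
        req = urllib.request.Request(
            f"{base_url}/v1/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        start = time.perf_counter()
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        end = time.perf_counter()
        # Most OpenAI-compatible servers report usage counts.
        n = data["usage"]["completion_tokens"]
        return tokens_per_second(n, start, end)
    ```

    Pointing `benchmark_endpoint("http://localhost:8000", ...)` at different backends serving the same model would give roughly comparable generation throughput without tying the harness to llamafile.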



© 2025 HackTech.info. All Rights Reserved.
