
Show HN: Penny-1.7B Irish Penny Journal style transfer by deepsquirrelnet

12 Comments

  • ekianjo
    Posted June 2, 2025 at 4:08 pm

    Nice work! It still manages to use the word 'delve' in the first sentence, which is a giveaway that it's written by an LLM.

  • veggieroll
    Posted June 2, 2025 at 4:23 pm

    Have you written anywhere in detail about how you gathered your dataset and trained the finetune? I have a few use cases like this, but I'm not sure where to start.

  • bee_rider
    Posted June 2, 2025 at 4:34 pm

    It is sort of funny that the Irish ended up being the best practitioners of the English language, despite the fact that they were forced to use it.

  • _1
    Posted June 2, 2025 at 4:35 pm

    Kind of strange to pick an example that is just wrong. It's supposed to be written as if in 1840, yet it says Paris is the seat of Napoleon almost 20 years after he died.

  • sjkoelle
    Posted June 2, 2025 at 4:39 pm

    Marvelous! What gain beyond zero-shot would motivate a humble citizen to implement this instrument? How was the superiority assessed?

  • kamranjon
    Posted June 2, 2025 at 4:47 pm

    This is really cool! Do you have any of the pipeline code available that you used for training? I am curious about how you created the reward model. I love little projects like this, thanks for sharing. I've been fine-tuning on my Mac and am interested in getting into GRPO, which I haven't tried yet.
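    For readers in the same position, here is a minimal sketch of what such a GRPO loop can look like with trl's GRPOTrainer, assuming a hypothetical style classifier as the reward model and SmolLM2-1.7B-Instruct as the base (the author's actual pipeline isn't shown in the thread):

    ```python
    from datasets import Dataset
    from transformers import pipeline
    from trl import GRPOConfig, GRPOTrainer

    # Hypothetical style classifier standing in for the reward model; the
    # project's actual classifier and its label names are assumptions here.
    style_scorer = pipeline("text-classification", model="path/to/penny-style-classifier")

    def style_reward(completions, **kwargs):
        # Score each sampled completion by the classifier's confidence that
        # it reads like the Irish Penny Journal; GRPO maximizes this reward.
        results = style_scorer(completions, truncation=True)
        return [r["score"] if r["label"] == "penny" else 1.0 - r["score"] for r in results]

    train_dataset = Dataset.from_dict(
        {"prompt": ["Rewrite in the style of the Irish Penny Journal: The weather was pleasant today."]}
    )

    trainer = GRPOTrainer(
        model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # assumed base model
        reward_funcs=style_reward,
        args=GRPOConfig(output_dir="penny-grpo", num_generations=8),
        train_dataset=train_dataset,
    )
    trainer.train()
    ```

    The key point is that the reward model doesn't have to be a preference model: any callable mapping completions to scalar scores works, which is why a simple style classifier is enough.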

  • latchkey
    Posted June 2, 2025 at 4:57 pm
  • joshstrange
    Posted June 2, 2025 at 5:00 pm

    Now I'm just imagining a video game where each character has their own fine-tune applied on top for their dialog. I'm guessing you could use some relatively small models. In each case you would feed all the context to the model (player name, current relevant quests, summary of previous interactions, etc.). Though maybe fine-tuning/training isn't even needed and a good enough prompt will work (not sure what all they used for this [0]); a rough sketch of the per-character adapter idea follows below.

    I'm excited for the first AAA game that tries this. Anyone who has played an RPG-style game knows that after a few trips into a city (or a couple of play-throughs) the dialog feels repetitive. I love the idea of Skyrim but with better dialog.

    You could either run the models on the user's computer, or run them on the backend so you can block certain generations (wrong/misleading/"unsafe") and just ship updated dialog lists to the client occasionally.

    [0] https://www.youtube.com/watch?v=d6sVWEu9HWU
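    One plausible way to realize this is one LoRA adapter per NPC swapped onto a shared small base model with peft; a minimal sketch, with all paths and names hypothetical:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # One small shared base model; each NPC gets its own LoRA adapter.
    base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")
    tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")

    model = PeftModel.from_pretrained(base, "adapters/blacksmith", adapter_name="blacksmith")
    model.load_adapter("adapters/innkeeper", adapter_name="innkeeper")

    def npc_line(npc: str, context: str) -> str:
        # Swap in the speaking character's adapter, then generate a reply
        # conditioned on the game-state context (player name, quests, etc.).
        model.set_adapter(npc)
        inputs = tok(context, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=60)
        return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

    print(npc_line("blacksmith", "Player Aria asks about the stolen sword quest.\nBlacksmith:"))
    ```

    Since adapters are small relative to the base model, dozens of characters could share one set of weights in memory, which is what makes the per-character approach tractable on a player's machine.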

  • fitsumbelay
    Posted June 2, 2025 at 5:35 pm

    this is awesome

  • KaiserPro
    Posted June 2, 2025 at 5:59 pm

    I'm not sure if you've tried this already, but removing the translate step might give you a more authentic output. In the journals that I saw, the language was much simpler than the output.

  • throwaway314155
    Posted June 2, 2025 at 6:29 pm

    You mention no supervised finetuning. May I ask why? I'm curious whether you could get similar/better/worse results by just fine-tuning the LLM on your dataset, rather than generating synthetic data, training a classifier, and using GRPO.

    Cool stuff in any case.
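    For comparison, the plain supervised route being suggested here is roughly the following, assuming a JSONL file of styled examples with a `text` column (file name and format are hypothetical):

    ```python
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Hypothetical dataset of modern-to-Penny-Journal rewrites, one
    # training example per line in a "text" field.
    train_dataset = load_dataset("json", data_files="penny_pairs.jsonl", split="train")

    trainer = SFTTrainer(
        model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # assumed base model
        args=SFTConfig(output_dir="penny-sft"),
        train_dataset=train_dataset,
    )
    trainer.train()
    ```

    The trade-off is that SFT needs paired (input, styled output) data up front, whereas the classifier-plus-GRPO route only needs examples of the target style to train the reward signal.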

  • npunt
    Posted June 2, 2025 at 7:33 pm

    Love it. Immediately reminded of the text filters back in the day, like the pirate one that would drop letters, replace them with apostrophes, and change certain passages into "arr" or "yarr matey".
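    Those old filters were just rule-based string rewrites; a toy version might look like this (the word list is purely illustrative):

    ```python
    import re

    def piratify(text: str) -> str:
        # Crude word-level substitutions, in the spirit of the old filters.
        swaps = {r"\bhello\b": "ahoy", r"\byes\b": "aye", r"\bmy\b": "me",
                 r"\byou\b": "ye", r"\bis\b": "be", r"\bare\b": "be"}
        for pat, rep in swaps.items():
            text = re.sub(pat, rep, text, flags=re.IGNORECASE)
        # Drop trailing g's and replace with an apostrophe: "sailing" -> "sailin'".
        text = re.sub(r"(\w{3,}in)g\b", r"\1'", text)
        return text + " Yarr!"

    print(piratify("Hello, you are sailing my ship."))
    # -> "ahoy, ye be sailin' me ship. Yarr!"
    ```

    The contrast with the fine-tuned model is the point: the filter only rewrites surface tokens, while the style-transfer model rewrites phrasing and sentence structure.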
