Modern-Day Oracles or Bullshit Machines by ctbergstrom

14 Comments

  • Post Author
    bertman
    Posted February 9, 2025 at 8:46 am

Synopsis from the project's "instructor guide":

    >This is not a computer science course, nor even an information science course—though naturally it could be used in such programs.

    >Our aim is not to teach students the mechanics of how large language models work, nor even the best ways of using them in various technical capacities.

    >We view this as a course in the humanities, because it is a course about what it means to be human in a world where LLMs are becoming ubiquitous, and it is a course about how to live and thrive in such a world.

  • Post Author
    ctbergstrom
    Posted February 9, 2025 at 8:47 am

    Jevin West and I are professors of data science and biology, respectively, at the University of Washington. After talking to literally hundreds of educators, employers, researchers, and policymakers, we have spent the last eight months developing the course on large language models (LLMs) that we think every college freshman needs to take.

    https://thebullshitmachines.com

    This is not a computer science course; it’s a humanities course about how to learn and work and thrive in an AI world. Neither instructor nor students need a technical background. Our instructor guide provides a choice of activities for each lesson that will easily fill an hour-long class.

    The entire course is available freely online. Our 18 online lessons each take 5-10 minutes; each illuminates one core principle. They are suitable for self-study, but have been tailored for teaching in a flipped classroom.

    The course is a sequel of sorts to our course (and book) Calling Bullshit. We hope that like its predecessor, it will be widely adopted worldwide.

    Large language models are both powerful tools, and mindless—even dangerous—bullshit machines. We want students to explore how to resolve this dialectic. Our viewpoint is cautious, but not deflationary. We marvel at what LLMs can do and how amazing they can seem at times—but we also recognize the huge potential for abuse, we chafe at the excessive hype around their capabilities, and we worry about how they will change society. We don't think lecturing at students about right and wrong works nearly as well as letting students explore these issues for themselves, and the design of our course reflects this.

  • Post Author
    ArchitectAnon
    Posted February 9, 2025 at 8:51 am

    [flagged]

  • Post Author
    ThouYS
    Posted February 9, 2025 at 9:13 am

    they're quite useful for being "bullshit machines"

  • Post Author
    KronisLV
    Posted February 9, 2025 at 9:13 am

    This is a pretty admirable goal!

    I'm saying this unironically, but I wish there were courses on looking at information critically and more in how to have a healthy and safe life in the modern day world (including things like data security, how to deal with social media etc.) that would be taught to everyone in schools/colleges/universities.

In my country, there are still public announcements about not trusting random callers and never giving your bank details to strangers (every bank homepage says that its employees will never ask for that stuff), and people regularly get scammed anyway. The only thing sort of saving them is that scamming only scales so far… until you throw automation into the mix, in addition to plainly spreading misinformation about any topic, or even just allowing people to be confidently incorrect and eliminating the need for them to think that much (e.g. students just asking ChatGPT to do their homework).

    Any step at least in the direction of educating people feels like a good thing.

    That said, I don't hate LLMs or anything, I use them for development more or less daily (lovely for boilerplate in your average enterprise Java codebase, for example) and recently saw this project, which made me happy: https://sites.google.com/view/eurollm/home

  • Post Author
    Karrot_Kream
    Posted February 9, 2025 at 9:24 am

    Great stuff! LLMs, social media, the information landscape has changed so much in the past decade. We need good pedagogical resources on how to think of these tools, both their benefits and their downsides.

  • Post Author
    mkarliner
    Posted February 9, 2025 at 9:25 am

    I wish I'd written this.
    Excellent.
    Everyone should read this.

  • Post Author
    picafrost
    Posted February 9, 2025 at 9:25 am

    I think a great number of working professionals need a course like this too. I am already tired of ChatGPT being cited by the less experienced as an invisible expert in the room during technical discussions.

  • Post Author
    padolsey
    Posted February 9, 2025 at 9:37 am

    Is there a way to download and read this as a document instead of web pages? They're hard to navigate.

  • Post Author
    teknopaul
    Posted February 9, 2025 at 9:41 am

    LLMs pattern match; they say something that sounds good at this point, but with no notion of correctness.
    Copilot is like pair programming with a loud, pushy intern who has seen you write stuff before, didn't understand it, but keeps suggesting what to do anyway.
    Some medium-sized chunks of code can be delegated, but every line it writes needs careful review.

    Crazy tech, but companies are just wrong to be trying to use LLMs as any kind of source of truth.
    Even Google is blind enough to think that AI could be used for search results, which have become memes they are so bad. And they won't get better; they'll just become more convincing.

  • Post Author
    abmmgb
    Posted February 9, 2025 at 9:46 am

    Your site looks cool! Nice topic!

    Some of them just try to predict the most likely next word.

    With reasoning and pauses for thought, they are becoming more capable.

    Most likely there is a big element of hype but the way you use them can make them really useful and accelerate your work.

    I recommend the book Co-Intelligence for newbies like myself.
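
    The "predict the most likely next word" idea mentioned above can be sketched with a toy bigram model — a hypothetical, deliberately oversimplified illustration (real LLMs use transformers over subword tokens, but the training objective, next-token prediction, is the same):

    ```python
    from collections import Counter, defaultdict

    # Toy bigram "language model": count which word follows which in a
    # tiny corpus, then always emit the most frequently seen next word.
    corpus = "the cat sat on the mat and the cat slept".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Most frequent continuation observed in the training data --
        # no notion of truth, only of likelihood.
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat", seen twice after "the"
    ```

    The point of the toy: the model outputs whatever was statistically most common in its training data, which is exactly why fluency and correctness come apart.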

  • Post Author
    K0balt
    Posted February 9, 2025 at 9:59 am

    There is a bit of very important content missing from the explanation of the autocomplete analogy.

    The combination of encoding / tokenization of meanings and ideas, related concepts, and mapping these relationships in vector space makes LLMs not so much glorified text prediction engines as browsers/oracles of the sum total of cultural-linguistic knowledge as captured in the training corpus.

    Understanding how the implicit and explicit linguistic, memetic, and cultural context is integrated into the idea/concept/text prediction engine helps to show how LLMs produce such convincing output and why they often can bring useful information to the table.

    More importantly, understanding this holistically can help people to predict where the output that LLMs can generate will -not- be particularly useful or even may be wildly misleading.
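
    The vector-space mapping described above can be made concrete with a minimal sketch — the three-dimensional "embeddings" below are made-up numbers for illustration (real models learn vectors with hundreds or thousands of dimensions), but the geometry is the point: related concepts sit close together, unrelated ones point in different directions.

    ```python
    import math

    # Hypothetical toy embeddings; values invented for illustration.
    vectors = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.2],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine(u, v):
        # Cosine similarity: 1.0 means same direction, 0 means unrelated.
        dot = sum(a * b for a, b in zip(u, v))
        norm = lambda x: math.sqrt(sum(a * a for a in x))
        return dot / (norm(u) * norm(v))

    print(cosine(vectors["king"], vectors["queen"]))  # near 1.0: related
    print(cosine(vectors["king"], vectors["apple"]))  # much smaller
    ```

    This proximity structure is what lets an LLM surface genuinely related ideas — and also why its output degrades where the training corpus was thin or skewed: the geometry only encodes what the corpus contained.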

  • Post Author
    neuronic
    Posted February 9, 2025 at 10:32 am

    > Moreover, a hallucination is a pathology. It's something that happens when systems are not working properly.

    > When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.

    > When LLMs get things wrong they aren't hallucinating. They are bullshitting.

    Very important distinction and again, shows the marketing bias to make these systems seem different than they are.

  • Post Author
    einrealist
    Posted February 9, 2025 at 10:34 am

    This website is so important!

    Now ask yourself why AI companies don't want to be regulated or scrutinized.

    So many companies (users and providers) jump on the AI hype train because of FOMO. The end result might be just as destructive as this mythical "AGI".


© 2025 HackTech.info. All Rights Reserved.
