
7 Comments

  • jweir
    Posted April 25, 2025 at 3:31 am

    After years of looking at APL as some sort of magic, I spent some time earlier this year learning it. It is amazing how much code you can fit into a tweet using APL. Fun, but hard for me to write.

  • colbyn
    Posted April 25, 2025 at 3:31 am

    I really wish I had finished my old Freeform note-taking app that compiles down to self-contained webpages (via SVG).

    IMO it was a super cool idea for more technical content that’s common in STEM fields.

    Here’s an example from my old chemistry notes:

    https://colbyn.github.io/old-school-chem-notes/dev/chemistry…

  • jamesrom
    Posted April 25, 2025 at 4:38 am

    It's easy to think of notation like shell expansions: that all you're doing is replacing expressions with other expressions.

    But it goes much deeper than that. My professor once explained how many great discoveries are paired with new notation. That new notation signifies "here's a new way to think about this problem." And many unsolved problems today will give way to powerful notation.

  • gitroom
    Posted April 25, 2025 at 5:31 am

    man i always try squishing code into tiny spaces too and then wonder why i'm tired after, but i kinda love those moments when it all just clicks

  • xelxebar
    Posted April 25, 2025 at 5:38 am

    > Subordination of detail

    The paper doesn't really explore this concept well, IMHO. However, after a lot of time reading and writing APL applications, I have found that it points at a way of managing complexity radically different from abstraction.

    We're inundated with abstraction barriers: APIs, libraries, modules, packages, interfaces, you name it. Consequences of this approach are almost cliché at this point—dizzyingly high abstraction towers, developers as just API-gluers, disconnect from underlying hardware, challenging to reason about performance, _etc._

    APL makes it really convenient to take a different tack. Instead of designing abstractions, we can carefully design our data to be easily operated on with simple expressions. Where you would normally see a library function or DSL term, this approach just uses primitives directly.

    For example, we can create a hash map of vector values and interned keys with something like:

        str←(⊂'') 'rubber' 'baby' 'buggy' 'bumpers'             ⍝ string table
        k←4 1 2 2 4 3 4 3 4 4                                   ⍝ keys
        v←0.26 0.87 0.34 0.69 0.72 0.81 0.056 0.047 0.075 0.49  ⍝ values
    

    Standard operations are then immediately accessible:

        k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92)  ⍝ insert values
        k{str[⍺] ⍵}⌸v                     ⍝ pretty print
        k v⌿⍨←⊂k≠str⍳⊂'buggy'             ⍝ deletion
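
    Lookup is just as direct; for instance, a quick sketch (using the existing 'baby' key as the example) that pulls every value stored under one key:

        (k=str⍳⊂'baby')/v                 ⍝ all values whose key is 'baby'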
    

    What I find really nice about this approach is that each expression is no longer a black box, making it really natural to customize expressions for specific needs. For example, insertion in a hashmap would normally need to have code for potentially adding a new key, but above we're making use of a common invariant that we only need to append values to existing keys.
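
    For comparison, here is a minimal sketch of that more general insertion path (hypothetical key/val names, assuming ⎕IO←1 and the str k v variables above), where the incoming key may or may not already be in the string table:

        key val←'foo' 0.5          ⍝ hypothetical pair to insert
        i←str⍳⊂key                 ⍝ index into string table; ≢str+1 if absent
        str,←(i>≢str)/⊂key         ⍝ intern the key only when it is new
        k,←i ⋄ v,←val              ⍝ append key index and value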

    If this were a library API, there would either be an unused code path here, lots of variants on the insertion function, or some sophisticated type inference to do dead code elimination. Those approaches end up leaking non-domain concerns into our codebase. But, by subordinating detail instead of hiding it, we give ourselves access to as much domain-specific detail as necessary, while letting the non-relevant detail sit silently in the background until needed.

    Of course, doing things like this in APL demands a lot of familiarity with its primitives, but honestly, I don't think that's much more work than deeply learning the Python ecosystem or anything equivalent. In practice, the individual APL symbols really do fade into the background and you start seeing semantically meaningful phrases instead, similar to how we read English words and phrases atomically rather than one letter at a time.

  • cess11
    Posted April 25, 2025 at 6:25 am

    Last year The Array Cast republished an interview with Iverson from 1982.

    https://www.arraycast.com/episodes/episode92-iverson

    It's quite interesting, and arguably more approachable than the Turing lecture.

    In 1979, APL wasn't as weird and fringe as it is today, because programming languages weren't the global mass phenomena they are now; pretty much all of them were weird and fringe. C was rather fresh at the time, and if one squints a bit, APL can look like an abstraction not far removed from dense C, one that lets you program a computer without implementing the pointer juggling over arrays yourself.

  • FilosofumRex
    Posted April 25, 2025 at 7:25 am

    Historically speaking, what killed off APL (besides the wonky keyboard) was Lotus 1-2-3 on the IBM PC, and shortly thereafter MS Excel. Engineers, academics, accountants, and MBAs needed something better than their TI-59 and HP-12C calculators. But the CS community was obsessing over symbolics, AI, and LISP, so the industry stepped in…

    This was a very unfortunate coincidence, because APL could have had a much bigger impact and solved far more problems than spreadsheets ever will.
