Should the book be first-axis oriented? #16
I'm not sure what the argument for it would be. For instance, in Python,
is a fairly familiar operation, whereas
is much less familiar. So it might make sense to start there, depending on the audience.
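The snippets in this comment did not survive, but the familiarity contrast it describes can be sketched with my own hypothetical examples: Python's builtin `sum` is everyday code, while the equivalent explicit fold via `functools.reduce` is comparatively rare.

```python
import functools
import operator

v = [1, 2, 3, 4]

# Familiar to most Python programmers:
total = sum(v)

# Equivalent, but much less familiar in day-to-day Python:
folded = functools.reduce(operator.add, v)

print(total, folded)  # 10 10
```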
There's no (explicit) map here, only reduce. The plus reduction of
I think I might be using APL terminology incorrectly? What I'm trying to say is that
matches a Pythonic notion of
or, equivalently,
So from my perspective, (Sidenote: @abrudz, I'm a big fan; you are an absolute machine, and it's exciting to interact with you for the first time!)
Ah, but that's really the wrong way to think of it. Let's define APL's `+` and first-axis reduction in Python:

```python
import functools

# Scalar-extending addition, like APL's +: recurses into nested lists,
# adds numbers at the leaves.
plus = lambda a, b: [plus(x, y) for x, y in zip(a, b)] if type(a) == list else a + b

# First-axis reduction: folds the major cells of its argument.
reduce1st = lambda f: lambda a: functools.reduce(f, a)
```

Now

Therefore,
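To see the point in action, here is my own worked example (not from the thread): `reduce1st(plus)` folds the rows of a matrix with scalar-extending addition, producing column sums -- the Python analogue of APL's `+⌿`.

```python
import functools

# Same definitions as above, repeated so this snippet runs standalone.
plus = lambda a, b: [plus(x, y) for x, y in zip(a, b)] if type(a) == list else a + b
reduce1st = lambda f: lambda a: functools.reduce(f, a)

m = [[1, 2, 3],
     [4, 5, 6]]

# Folding the rows with plus gives column sums -- like APL's +⌿m.
print(reduce1st(plus)(m))  # [5, 7, 9]
```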
tl;dr: Great point, @abrudz -- I wish MDAPL went deeper into things like this -- not just the how-to but the underlying why.

That is a GREAT point. I had not considered that reduce1st is the more fundamental operation, rather than reduce. It's things like this that still stun me about APL. In fact, I would offer that the absence of insights like this is the only shortcoming of MDAPL -- I feel like there should be a sequel, or some additional material on the deeper thinking behind it.

For instance, in the classic CS curriculum one of the foundational schools of thought is the lambda calculus -- which we are taught is equivalent to a Turing machine (aka an infinite-length vector) -- and one of the first things we learn to construct from pure functions is a

However, I suspect, based on my learning of APL, that tensors are a mathematically equivalent way to represent general computation. My first reading of "Mastering Dyalog" felt like a teaser: there are many truths and proofs like the one @abrudz just presented that are only hinted at. It also seems like array-oriented programmers are largely partitioned off from the rest of devs, so it's hard to bump into that information by accident. For instance, it's hard to picture what non-trivial day-to-day applications would look like -- writing a web server, say, or parsing a text document -- but it's clearly possible. Not only that, the underlying algorithms for things like the magical inner product function are not treated in MDAPL.

Given the importance of tensors to ML/AI, I am really surprised there is not a lot more research and training invested in array-oriented programming. In some of the scientific computing circles we're making efforts to understand tensors better, and there are some early experimental efforts at incorporating APL into the language, but I feel like I'm poking a very elegant and complex machine with a stick to try to understand how it works.
Besides MDAPL, I'm not even sure where to go for deeper insight! Anyway, thanks for listening to my TED talk, and for all the great work on MDAPL!
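Since the comment above mentions the "magical" inner product without MDAPL treating its algorithm, here is a rough sketch of my own (the `inner` helper name and list-of-lists representation are assumptions, matching the `plus`/`reduce1st` style used earlier in the thread) of how APL's generalized inner product `f.g` can be expressed: pair up a row with a column under `g`, then reduce with `f`.

```python
import functools
import operator

# Hypothetical sketch of APL's generalized inner product f.g for matrices:
# combine each row of a with each column of b under g, then reduce with f.
def inner(f, g):
    def product(a, b):
        cols = list(zip(*b))  # columns of b
        return [[functools.reduce(f, [g(x, y) for x, y in zip(row, col)])
                 for col in cols]
                for row in a]
    return product

# +.x (plus.times) is the ordinary matrix product.
matmul = inner(operator.add, operator.mul)
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```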
I'll offer one more point: languages like Python and Clojure have long and detailed notes about why the language was designed in certain ways. For instance, Python has its PEPs, and in Clojure we can't shut up about the whys of the language -- Rich has a great history on it. And I can tell that APL and Dyalog had a LOT of thought put into them, and that some decisions were made very carefully. It would be so great to be able to dig more into that thought process.
The original book is mostly last-axis oriented, e.g. preferring `/` over `⌿` when working with vectors. Should we change that?
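To make the `/` vs `⌿` distinction concrete, here is a sketch of mine (not from the issue) in plain Python terms, using a nested-list matrix: last-axis reduction sums each row, first-axis reduction sums each column.

```python
# Last-axis vs first-axis plus-reduction of a 2x3 matrix.
m = [[1, 2, 3],
     [4, 5, 6]]

last_axis = [sum(row) for row in m]         # +/m : one sum per row
first_axis = [sum(col) for col in zip(*m)]  # +⌿m : one sum per column

print(last_axis, first_axis)  # [6, 15] [5, 7, 9]
```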