Of Course ML Has Monads!


A popular meme in the world of PLs is that “Haskell has monads”, with the implication that this is a distinctive feature of the language, setting it apart from all others.  While it is true that Haskell has popularized the use of monads as a program structuring device, the idea of a monad is not so much an issue of language design (apart from the ad hoc syntactic support provided by Haskell) as one of library design.  After all, a monad is just one of a zillion signatures (type classes) with which to structure programs, and there is no particular reason why this one cannot be used in any language that supports even a modicum of modularity.

Examined from the point of view of ML, monads are but a particular mode of use of modules.  The signature of monads is given by the following definition:

signature MONAD = sig
  type 'a monad
  val ret : 'a -> 'a monad
  val bnd : 'a monad -> ('a -> 'b monad) -> 'b monad
end

There are many, many, many structures that satisfy this signature; I needn’t (and, in any case, can’t) rehearse them all here.  One particularly simple example should suffice to give the general idea:

structure Option : MONAD = struct
  type 'a monad = 'a option
  fun ret x = SOME x
  fun bnd (SOME x) k = k x
    | bnd NONE k = NONE
end

This is of course the option monad, which is sometimes used to model the data flow aspects of exceptions, perhaps with some elaboration of the NONE case to associate an exceptional value with a non-result.  (The control flow aspects are not properly modeled this way, however.  For that one needs, in addition, access to some sort of jump mechanism.)
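To make the parenthetical concrete, the NONE case can be elaborated to carry an exceptional value.  The following is a sketch of my own (the constructor names Res and Err are illustrative, not from the post), again matching the MONAD signature above:

```sml
(* A "result" monad: like Option, but the failure case carries a
   value of type exn describing what went wrong.  Failure short-
   circuits the rest of the computation, just as with Option. *)
structure Result : MONAD = struct
  datatype 'a result = Res of 'a | Err of exn
  type 'a monad = 'a result
  fun ret x = Res x
  fun bnd (Res x) k = k x
    | bnd (Err e) _ = Err e
end
```

As with Option, this models only the data-flow aspect of exceptions; the error value is threaded to the end rather than transferring control.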

Examples like this one proliferate.  A monad is represented by a structure.  Any structure that provides the facilities specified by the MONAD signature gives rise to the characteristic sequentialization mechanisms codified by it.  Monad transformers are functors that transform one monad into another, with no fuss or bother, and no ad hoc mechanisms required.  Standard modular programming techniques suffice to represent monads; moreover, the techniques involved are fully general, and are equally applicable to other signatures of interest (arrows, or quivers, or bows, or what have you).  Moreover, it is shown in my paper with Chakravarty, Dreyer, and Keller how to integrate modules into the type inference mechanism of ML so that one can get automatic functor instantiation in those limited cases where it is self-evident what is intended.  This has been implemented by Karl Crary in a prototype compiler for an extension of Standard ML, and it would be good to see this supported in more broadly available compilers for the language.
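The point about monad transformers can be made concrete with a small sketch of my own (not code from the post): the familiar option transformer, written as an ordinary ML functor over the MONAD signature above.

```sml
(* OptionT layers the possibility of failure over an arbitrary
   monad M, yielding another structure matching MONAD. *)
functor OptionT (M : MONAD) : MONAD = struct
  type 'a monad = 'a option M.monad
  fun ret x = M.ret (SOME x)
  fun bnd m k =
    M.bnd m (fn NONE   => M.ret NONE
              | SOME x => k x)
end

(* Transforming a monad is just functor application, e.g. over the
   Option structure given earlier: *)
structure OptOption = OptionT (Option)
```

No special mechanism is involved: a transformer is a functor, and instantiating it is functor application.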

The bulk of the mania about monads is therefore accounted for by modules.  I have no doubt, however, that you are wondering about the IO monad in Haskell.  Isn’t that a fundamental feature of the language that cannot be replicated in ML?  Hardly!  It’s entirely a matter of designing the signatures of the standard basis library modules, and nothing more.  The default basis library does not attempt to segregate effects into a monad, but it is perfectly straightforward to do this yourself, by providing your own layer over the standard basis, or to reorganize the standard basis to enforce the separation.  For example, the signature of reference cells might look like this:

signature REF = sig
  type 'a ref
  val ref : 'a -> 'a ref IO.monad
  val ! : 'a ref -> 'a IO.monad
  val := : 'a ref -> 'a -> unit IO.monad
end

Here we are presuming that we have a fixed declaration

structure IO : MONAD = ...

that packages up the basic IO primitives that are already implemented in the run-time system of ML, more or less as in Haskell.  The other signatures, such as those for mutable arrays or for performing input and output, would be modified in a similar manner to push effects into the IO monad.  Et voilà, you have monadic effects.
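To give a feel for what client code would look like against such a reorganized basis, here is a hypothetical fragment of my own: it assumes structures Ref : REF and IO : MONAD as above, and defines an infix >>= merely as shorthand for IO.bnd.

```sml
(* Hypothetical client code against the monadic REF interface:
   allocate a cell, increment it, and read it back.  Nothing
   happens until the resulting action is executed by the
   (extra-linguistic) top level. *)
infix >>=
fun m >>= k = IO.bnd m k

val incr : int Ref.ref -> unit IO.monad =
  fn r => Ref.! r >>= (fn n => Ref.:= r (n + 1))

val prog : int IO.monad =
  Ref.ref 0 >>= (fn r =>
  incr r    >>= (fn _ =>
  Ref.! r))
```

Evaluating prog allocates nothing and prints nothing; it merely builds an action, exactly as in Haskell.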

There’s really nothing to it.  In fact, the whole exercise was carried out by a Carnegie Mellon student, Phillippe Ajoux, a couple of years ago.  He also wrote a number of programs in this style just to see how it all goes: swimmingly.  He also devised syntactic extensions to the Moscow ML compiler that provide a nicer notation for programming with monads, much as in Haskell, but better aligned with ML’s conventions.  (Ideally it should be possible to provide syntactic support for any signature, not just monads, but I’m not aware of a worked-out design for the general case, involving as it would an intermixing of parsing and elaboration.)

My point is that the ML module system can be deployed by you to impose the sorts of effect segregation imposed on you by default in Haskell.  There is nothing special about Haskell that makes this possible, and nothing special about ML that inhibits it.  It’s all a mode of use of modules.

So why don’t we do this by default?  Because it’s not such a great idea.  Yes, I know it sounds wonderful at first, but then you realize that it’s pretty horrible.  Once you’re in the IO monad, you’re stuck there forever, and are reduced to Algol-style imperative programming.  You cannot easily convert between functional and monadic style without a radical restructuring of code.  And you are deprived of the useful concept of a benign effect.

The moral of the story is that of course ML “has monads”, just like Haskell.  Whether you want to use them is up to you; they are just as useful, and just as annoying, in ML as they are in Haskell.  But they are not forced on you by the language designers!

Update: This post should’ve been called “ML Has Monads, Why Not?”, or “Of Course ML Has Comonads!”, but then no one was wondering about that.

Update: I now think that the last sentence is excessive.  My main point is simply that it’s very simple to go one way or the other with effects, if you have modules to structure things; it’s all a matter of library design.  A variant of ML that enforced the separation of effects is very easily constructed; the question is whether it is useful or not.  I’ve suggested that the monadic separation is beguiling, but not clearly a great idea.  Alternatively, one can say that we’re not that far away from eliminating laziness from Haskell, at least in this respect: just re-do the standard basis library in ML, and you’re a good ways there.  Plus you have modules, and we understand how to integrate type classes with modules, so the gap is rather small.

Update: Removed inaccurate remarks about unsafePerformIO.

64 Responses to Of Course ML Has Monads!

  1. xyz abc says:

    The syntax sugar for MonadPlus in Haskell is useful. It allows you to perform list (monadic) comprehensions in which you only include elements that satisfy a pattern. For example, let’s say you want to get the first element of every nonempty list in ys; you would do that as [x | (x:xs) <- ys]. Desugaring that gives you something horrendous. Allowing patterns on the left of "x <- y" in a monadic comprehension is a good alternative to using boolean-blind higher-order functions like filter. Since Standard ML doesn't provide a decent way to do this, it lacks an acceptable syntax for MonadPlus in general or list comprehensions in particular.

  2. notanotheraccount says:

    Professor,

    Somewhat off topic, Haskell’s older sibling Clean relies on uniqueness typing rather than monads. I’m curious how that looks in the world as you see it. Thanks.

  3. Ashley Yakeley says:

    “And you inevitably need unsafePerformIO to get anything serious done. In practical terms, you are deprived of the useful concept of a benign effect, and that just stinks!”

    You don’t need unsafePerformIO to get almost anything serious done. Pretty much the only time you need it is when you want “benign” effects, which turns out to be rarely.

    Haskell’s “pure” function type is extremely helpful in reasoning about effects. I can’t imagine wanting to program in a language without such a type.

  4. roshanjames says:

    While what you say is valid, in my limited experience, I have noticed that the notion of “what things are values” is sometimes lost on ML (specifically OCaml) programmers.

    I am referring to the “Moggi principle” here: informally that monadic computations behave as values in pure contexts – such as in (let x = print “hello” in do x;x).

    As an example of this value abuse, consider the OCaml Lwt library, which exposes a monadic interface but does not follow the monad laws: Lwt.t types are effectful in pure contexts.

    This is a social observation – there is no technical reason why this should be so. I suspect it is a side-effect of CBVness on programmer brains. Much like the point you and SPJ make about the social reasons that prevent the mixing of laziness and side-effects like assignments. (To clarify, I am not suggesting that monad laws follow from the host language being lazy.)

  5. Ston Dewart says:

    programming in Haskell is rather like programming in an updated dialect of Algol

    It is uncalled for to insult a programming language like that. Comparing it to such a piece of crap is worse than insulting someone’s mother. Please retract your poisonous words.

    On behalf of all ALGOL 68 users,

    Ston Dewart.

  6. eleganesh says:

    How would you implement something like Haskell’s mapM in SML?

  7. I’m slightly confused – I don’t see how monadic IO is solely a library issue. An important feature of Haskell’s design is that there is no primitive to actually execute an IO action (ignoring unsafePerformIO, which isn’t an “official” part of the language). IO actions are only executed by external interpretation of the “main” value. In other words, the language proper has to be aware of the IO monad, and “main” is a minimal hook it has to provide to enable bootstrapping the monadic library without loopholes.

    • Abstract Type says:

      I don’t understand the question, can you rephrase?

    • jamesiry says:

      I’ll give it a shot.

      One way to model Haskell is that main builds an object of type IO (), and only the thing calling main has the power to further interpret that object to the point where effects can occur.

      Another model is to say that Haskell’s IO monad is designed around threading a world state through a series of computations. What keeps pure code from silently performing effects is that a) the monad keeps the world state “locked away” from ordinary code as it is threaded through and b) the only thing that can generate an initial world state is the thing that calls main.

      However you slice it, main seems special in Haskell.

    • Assume I replace SML’s print : string -> unit with something of type string -> unit IO.monad (like Haskell’s putStr :: String -> IO ()). Then print “hello” would no longer produce an effect on its own, but just create a first-class monadic action. How would I execute that action?

      In Haskell, it is executed only if it becomes part of the global value of main :: IO a, i.e., the language requires that there is a main of IO type, and the runtime system knows what to do with it.

      The only other possibility I see is to introduce unsafePerformIO : ‘a IO.monad -> unit as a legitimate library primitive, and thereby throwing all typing guarantees about effect encapsulation out the window.

      Or am I missing something?

    • Abstract Type says:

      Oh, what we did for that was to re-do the top-level loop so that there is a command “do e” (or similar syntax, I forget exactly) that executes e for its effect. It is not an ML function; it is rather part of the interactive system. In a batch compiler the equivalent would be running the a.out (so to speak), again not part of the language.

    • I don’t quite follow you there. Isn’t adding “do e” to the REPL effectively extending the language? And what does “running a.out” mean, unless the language somehow defines what the IO action produced by a program is?

      Maybe this is splitting hairs about where the boundaries of the language are, but to me it seems quite a stretch to say that such fundamental semantic magic is not part of the definition of the language — especially when you’re usually arguing for viewing everything from a language perspective ;). Do we need to talk about the “compiler” or the “runtime system” after all?

    • Abstract Type says:

      It’s a good point, but it seemed entirely natural to me to have an expression that evaluates to an IO(string), say, and then to demand that something be done with it once the value has been determined. This process of connecting to the outside world is extra-linguistic.

  8. neelk says:

    In practical terms, you are deprived of the useful concept of a benign effect, and that just stinks!

    I find this a very surprising comment, especially coming from you!

    So, I take “benign effect” to mean that a function is permitted to use effects, but must continue to be extensional. In general, there will be terms which are extensional, but which are not definable in the pure sublanguage.

    This gives something a bit like an extensional or realizability-style interpretation of types, where the meaning of a type is given by its realizability conditions, rather than an intensional or type-theoretic interpretation, where the meaning of a type comes from its intros or elims.

    Given the existence of this gap in interpretations, I sort of expected you to stand firm for proof theory, against the seductive lure of adding constants/axioms to a language/logic. :)

    • Abstract Type says:

      I didn’t say anything about extensionality. But in the course of my research life I have been on both the realizability side (NuPRL) and the proof-theoretic side (LF).

  9. ppgallardo says:

    Dear Prof. Harper,

    According to your own comments:

    I simply meant that it is not clear whether monadic isolation of effects (esp store effects) is a good idea, because it precludes benign effects.

    … for the sake of parallelism: you can’t evaluate a Sequence.map in parallel unless you know that the function being mapped is effect-free.

    It seems that there is a conflict here. You can either have effects isolated by default (Haskell), which is good for parallelism, but requires some mechanism like unsafePerformIO to use begin effects,
    or you can have unsafe free access to effects by default (ML), which is good for begin(safe) effects, but you would need your own effect system to use parallelism. I don’t see clearly that this second approach is a better one.

    • ppgallardo says:

      Sorry, I meant benign effects

    • Abstract Type says:

      Exceptions are perfectly compatible with parallelism; it is only storage effects (which include I/O) that create a problem. If we isolate storage effects using a re-organized library in the manner I described, we get what we need for parallelism. I consider this significant, but you may not. The concept of purity is not entirely clear, it seems to me.

    • neelk says:

      When you say exceptions are compatible with parallelism, do you mean exceptions with or without handlers?

      In the absence of handlers you’re obviously correct (and it synergizes beautifully with strictness), but if there’s a catch-form then it seems to me that the naive parallel semantics becomes nondeterministic.

      Using par(e,e') to indicate a parallel tuple, consider the expression:

        par(raise (Error 1), raise (Error 2))
        handle 
          Error n => n 
      

      This seems like it may give different answers depending on the scheduling of the left and right components of the pair.

    • Abstract Type says:

      With handlers, of course, otherwise they wouldn’t be called exceptions! Your example shows only that the join-point logic is more subtle than you are imagining it to be. The crucial point is to ensure that parallelism does not change the semantics of the program. So, in this case, we must ensure that the leftmost exception is the one that is propagated, if any, because that is what would happen in the sequential case. The join point logic ensures that this is the case; I won’t bother to describe here how that is done, it’s not hard to imagine.

      Interestingly, we can also admit other effects, such as printing, by the same means: suspend the output until the join-point is reached, then print in left-to-right order. Input doesn’t seem susceptible to this kind of treatment, but random seeds easily are, because they can be split. We rely on this to implement treaps in parallel (though there are other means based on fingerprinting that avoid the need for random numbers).

      Crucially, the forms of parallelism we consider always have a well-defined join point. Without that, I cannot envision a way to admit effects, even exceptions, in parallel without affecting the sequential meaning. So, for example, parallel futures are out of the question because there is no well-defined join point. Indeed, this is precisely the same reason why it is so awkward to manage exceptions in a lazy language, among other things.

      It seems that I ought to have explained my motivations for this post more thoroughly up front. It seemed that I could just break out the one point, re-defining the interfaces of some of the basis libraries, to get the isolation of effects that I want to ensure absence of race conditions in parallel. My mistake.

  10. mqtthiqs says:

    Thank you for the great post (and the great blog). Is the work from Philippe available somewhere? I’m confused, for the same reasons as Andrej is, and would love to see the details.

  11. ee8a91jjf says:

    you are deprived of the useful concept of a benign effect

    Is there a one-size-fits-all set of criteria for what counts as benign?

    A random number source (perhaps the “most benign”) breaks referential transparency, mutable references break unrestricted intersection types… it seems like what is benign from one perspective might not be benign from another.

    In the absence of such criteria, might it not be a bad idea to advertise the “level of benignness” of an expression’s effects in its type?

  12. ee8a91jjf says:

    Agda and Coq have types too… ;)

    • ee8a91jjf says:

      Sorry, “types” should be “monads”.

      It appears that I cannot delete or edit my comments :(

  13. dagit says:

    I’m sorry if I’m crossing a boundary of politeness by pointing this out, but I think it needs to be said so here I go…

    I don’t know if you realize this, but when you post like this you’re not making friends in the Haskell community.

    This post makes false claims about Haskell. Claims that are pretty clearly false to people who regularly write Haskell programs/libraries. When a post like this hits reddit, people in the Haskell IRC channel start discussing it. Just a few more posts like this one and most people will see Harper as another way to spell Harrop. Which is to say, Haskellers won’t take you seriously anymore.

    Haskell is not perfect, but no language is. I would say no programming language is good enough to use on technical merit alone. Haskell is a good language with a nice implementation, nice libraries and an even nicer user community.

    I don’t know ML but I assume it also has nice implementations and nice libraries. This post makes me think that the ML user community is starting to look like the common lisp user community. Has the ML community become grumpy, decaying, and envious of other languages? If so, I would avoid it just for that reason. For example, I stopped using common lisp for Haskell based on community and static types.

    If your point is to get people learning and using ML instead of Haskell, then I think posts like this do more harm than good.

    • Abstract Type says:

      My purpose in writing is neither to make friends nor to make enemies. I’m trying to explain the world as I see it, and in particular to justify the reasons why we are teaching things the way we are teaching them, which in some cases are contrary to what seems to be conventional wisdom (eg, that one would necessarily use Haskell to teach FP). I could simply not explain anything, but then people would say that my positions are simply religious, rather than based on particular technical points. I did not knowingly make any false claims about Haskell, which is distinct from holding views that are unpopular in some quarters. If I am wrong about some point, then it’s a mistake, and it will surely be corrected here.

  14. Yin Wang says:

    In an ironic twist the emphasis on monads in Haskell means that programming in Haskell is rather like programming in an updated dialect of Algol with a richer type structure than the original, but the same overall structure.

    I can’t agree more with this. It almost looks like imperative programming. Just look at the source code of GHC and you can feel this ;-)

    There may be better ways than monads.

    • eyall says:

      Monads allow for imperative programming, amongst many other things. Imperative programming is just one special use case of Monads. They are very useful for many other structures (see parser combinators, non-determinism, exceptions, etc).

      GHC is indeed written in an imperative style, and some in the Haskell community voice concern over that — but of course that is a choice of style that could go either way. There are plenty of functional-style Haskell compilers.

    • Yin Wang says:

      You made good points. I looked at the source code of JHC, and it looks a lot more functional than that of GHC. Thanks, eyall.

  15. pigworker says:

    Monads arrived in CS as a way to give an on-paper semantics to the crime that well-typed ML programs get up to while they’re “not going wrong”. ML had a monadic explanation before Haskell had monadic anything. What went wrong?

    • Abstract Type says:

      Dunno. The need for something was greater in a lazy language than in an eager one, I suppose.

  16. ejgallego says:

    Dear Prof. Harper,

    I agree that monadic style can be used in any programming language.

    But I disagree with your view that monads are not a great idea. IMHO, a monad is for a computation what an abstract data type is for a structure.

    That way you can build “abstract computations”. Is it similar to other approaches using modules? Yes, but it also has particular advantages over them.

    • Abstract Type says:

      I simply meant that it is not clear whether monadic isolation of effects (esp store effects) is a good idea, because it precludes benign effects.

    • ejgallego says:

      Two points:

      IMHO, you practically identify monads with effects. There are a lot of monads that have nothing to do with effects; they are just abstract concepts of computation, like “computations that read”, “computations that fail”, etc…

      Well, I think the problem is a different point of view: what you call “benign effects” may be understood as some sort of abstract computation. However, if the effect is “benign” and you can provide a proof of it, then using unsafePerformIO is not so troubling, so I don’t understand the point; you can use “benign” effects in Haskell.

    • Abstract Type says:

      The first part of my post simply points out the obvious, that monads are not inherently about effects. I then, clumsily, tried to explain how one might integrate monadic storage effects (I should have been more clear, I was not thinking about non-termination or exceptions) into the ML libraries, by a simple change of interface. We did this as an experiment in order to better support parallelism by avoiding anything that would create indeterminacy. It was remarkably simple to redo the basis interfaces to isolate storage effects; it is much harder to change the parser to add syntactic support. (This is clearly not a library issue, but it might be nice if we didn’t have to support it on an ad hoc basis.)

    • Abstract Type says:

      I am not clear on whether one is supposed to use unsafePerformIO or not. If so, the contextual equivalence becomes substantially finer, and may even coincide with ML’s. If not, then I don’t see a way to get benign effects, which can be useful, at least practically and arguably conceptually.

    • meteficha1 says:

      Use the ST monad.

    • Abstract Type says:

      If you mean to use runST, that doesn’t work in cases where the store persists, as, for example, in laziness itself.

    • eyall says:

      You are supposed to use unsafePerformIO when your IO action is pure:

      * Has no externally visible effects

      * Is referentially transparent, has same result for same inputs

      You might consider the pure function (+) to be implemented via side-effecting operations (e.g.: save registers, load into registers, add, restore registers). While each of these individual steps is impure, the composition of them may be pure — and that is where unsafePerformIO is warranted.

    • chrisdone says:

      Harper,

      For what it’s worth, the need for unsafePerformIO is rare, but the use cases exist, obviously; for benign effects in one specific case, the ST monad with the pure function runST :: (forall s. ST s a) -> a which will run its argument impurely (allowing mutable access to the state `s’), but the whole computation is of course referentially transparent. I believe this is also available in Clojure in some form.

      Otherwise I have had no need to use unsafePerformIO myself nor, in my experience, have many others.

      Interesting post.

    • sclv says:

      I do think (PL) monads are about effects — but “effects” encompasses nondeterminism, exceptions, capturing the stack, probability distributions, substitution in general, etc. As a loose approximation, I guess we could classify “effects” as those things which are not expressible in a direct style (and in those two words is all the hand-waving) in the simply typed lambda calculus, but which are expressible through a global substituting transform. I’m sure there have been more precise definitions in papers I can’t recall offhand.

      As for unsafePerformIO, it was introduced mainly for the FFI if I remember correctly. Other than that, the only legitimate uses I know of are for variations of caching and memoization. There’s a discussion of this, “benign side effects” and all, in Jones, Marlow, and Elliott’s “Stretching the storage manager” paper.

      As you note, Haskell’s typeclasses permit somewhat more polymorphism than ML modules, but otherwise, yes, one can implement a typed separation of effects in a wide variety of languages, given sufficient control over the base libraries. I do think, however, that you discount the various paradigms for effectful Haskell code that are out there — including, but by no means limited to, various forms of FRP, as well as applicative or idiomatic libraries, etc. And of course these paradigms aren’t unique to Haskell either, but the point is that with-or-without monads they enjoy certain benefits provided by at least a coarse-grained typed separation of effects.

      And finally, as far as purely imperative code goes, is there really much newer or better than ALGOL ’68 anyway?

    • Abstract Type says:

      I don’t know much about Algol-68; I must’ve read a paper on it somewhere along the way. All I remember was the extensive special vocabulary, like “procety moid” for “procedure type, mode void”, which I think means something like “A->1” here in modern times. I did write about an updated formulation of Algol-60 in PFPL, drawing out the underlying store monad, eliminating the dependence on cbn, and connecting with more recent type systems for mobile computation (surprisingly, at first, but quite natural once you think of it).

      Agreed there are many program structuring devices around; that was kind of my point in part 1 of my post. In part 2 I neglected to stress that the point of the exercise was to eliminate nondeterminacy in the presence of parallelism; that was what Philippe’s project was about. By introducing a monadic separation for the store and for I/O we can be sure that the deterministic semantics is preserved (taking care at join points to propagate exceptions properly). Randomness is an interesting case, because it is basically stateful, yet compatible with parallelism, because there are ways to split a random seed into two without loss of generality. This is one way to implement treaps in parallel, which Philippe also developed. At no point did we intend that exceptions or non-termination be regarded as other than “pure”. To me, only exception allocation (name generation) is impure, because it involves centralized state to ensure global unicity. The control mechanism is pure, as would be call/cc if we were to have it.

  17. eyall says:

    If you re-define the standard basis library with different signatures, you’d no longer have access to the underlying primitives such as printf.

    Then you’re in new-language territory, and outside of library territory. Modifying all existing impure libraries of ML before being able to use them is not very modular.

    • Abstract Type says:

      My point is that you can deploy the existing machinery of the ML language to define the libraries however you wish. It’s not a redesign. In fact it’s a trivial exercise.

  18. And for what it’s worth, Scheme “has monads”, too. You can write programs in monadic style in any language that’s sufficiently expressive. It’s a question of which batteries are included.

    • Abstract Type says:

      The whole point of a monad is the type, so I would say, not really in the case of Scheme. Unless you’re using typed Scheme, of course.

    • I think I was failing to distinguish between monads and monadic style. Let me take another stab at it: the point of a monad is to package up in a signature the essential building block of monadic style, whereas the point of monadic style is to write programs that simulate the control-flow behavior of effectful programs, but do it without using effects.

      Even if one can’t do the former in Scheme, one can absolutely do the latter. Of course, that’s not to say that I think doing the latter without the former is a good idea. To the contrary, what you end up with is quite terrifying. I merely wanted to point out that it’s possible — and, for a few people I know, maybe even fun.

    • Yin Wang says:

      I agree with Lindsey. Monads can indeed be separated from types. Wadler’s first paper on monads, “The Essence of Functional Programming”, used code that was not necessarily typed. By that I mean, the code can be translated into Scheme directly by just stripping off the types. A monadic value is just a bundle of a pure value and elements to carry effects (be it state, continuation, or some other effect). We just need monadic values (not necessarily typed) in order to use monads. Type systems are helpful, but they are not essential to monads.

      Wadler didn’t use type classes either. He used different sets of names for unit, bind and return. That’s a little awkward and type classes make code a lot cleaner, but didn’t necessarily obscure the essence.

      Usually in Haskell programming, there are three things interleaved together: monads, types, type classes. But they have separate identities.

    • Yin Wang says:

      Sorry… please delete the return in my earlier reply. It is the same thing as unit. I think in the classical triple of a monad, (M, unit, bind), we only need unit and bind. The M part is an assurance from the type system, but it is not an essential part.

      unit wraps a pure value and creates a monadic value; bind takes it apart and transforms it into a new monadic value. If we want the monadic value to be typed, we have the option of using M. But we can also risk going without it, without losing the essence.
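      For instance, the point can be made in Standard ML itself (a sketch of my own, using the option monad): unit and bind are just two ordinary functions, with no MONAD signature or type-class machinery in sight.

```sml
(* Sketch: the option monad as two plain functions; no signature needed. *)
fun unit x = SOME x

fun bind NONE     _ = NONE
  | bind (SOME x) k = k x

(* Monadic style: a computation that can fail, sequenced with bind. *)
fun safeDiv _ 0 = NONE
  | safeDiv a b = unit (a div b)

val example =
  bind (safeDiv 10 2) (fn q =>
  bind (safeDiv q 0)  (fn _ =>
  unit q))
(* example = NONE : int option, since the second division fails *)
```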

    • Franklin says:

      > The whole point of a monad is the type

      Unfortunately, that is only part of the point. An important part of the purpose of a monad is the monad laws, which have not been addressed in this post.

    • Abstract Type says:

      That is a good point, which I did not address.

  19. dons00 says:

    One minor comment: “And you inevitably need unsafePerformIO to get anything serious done.”

    This isn’t the case, as witnessed by the thousands of useful Haskell packages on http://hackage.haskell.org (including popular apps like xmonad) that do not make use of the IO escape hatch.

    • Abstract Type says:

      Point taken.

      Update: I should add that when we used Haskell here, we did not find that to be the case. Karl Crary may wish to comment on his mtex experience, which was initially in Haskell, but which he then abandoned.

    • Took me too long, but I’ve withdrawn the statement. It was my impression at the time, but I accept that it was wrong and have removed it.

  20. andrejbauer says:

    I think I am a bit confused. One point of monads in Haskell is that the type of an expression tells you about the (possible) presence of effects. For example, unless you resort to dirty tricks, you cannot have a piece of code that performs I/O and whose type is int. How would that work in ML? Suppose we redesign the standard library so that all I/O operations have special types that indicate the presence of effects (as suggested in your post). I can still write:

    let f x =
      let _ = print "Hello world"
      in x + 42 
    

    Is it not the case that the type of f is int -> int, no matter how you type print? We could try changing the style of programming by providing explicit continuations to I/O operations, e.g.,

    let g x =
       print "Hello world" (fun () -> x + 42)
    

    But even that won’t work because someone will write

    let h x =
       let _ = print "Hello world" (fun () -> ())
       in x + 42
    

    How do your students’ projects prevent the programmer from creating effects which are not visible in the types?

    • Abstract Type says:

      If you re-define the standard basis library with different signatures, you’d no longer have access to the underlying primitives such as print. In effect, some parts of the basis are automatically “opened” for you by the compiler; officially, you’d want to write TextIO.print. But if TextIO were re-defined very slightly, you could give it a different signature. This is what Philippe did a couple of years ago, so that the segregation of effects was enforced. Our reason for trying this out was parallelism: you can’t evaluate a Sequence.map in parallel unless you know that the function being mapped is effect-free (otherwise you’d have race conditions and all the complications of concurrency).
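      Concretely, the re-definition might look something like the following sketch (my own illustration, not Philippe’s actual code; the IO_MONAD name and run function are hypothetical): seal the I/O primitives behind an opaque monad, so that the effect is forced to show up in the type.

```sml
(* Sketch only: a hypothetical re-sealing of the basis primitives. *)
signature IO_MONAD = sig
  type 'a io
  val ret   : 'a -> 'a io
  val bnd   : 'a io -> ('a -> 'b io) -> 'b io
  val print : string -> unit io
  val run   : 'a io -> 'a   (* intended for top-level use only *)
end

structure IO :> IO_MONAD = struct
  type 'a io = unit -> 'a   (* a suspended effectful computation *)
  fun ret x   = fn () => x
  fun bnd m k = fn () => k (m ()) ()
  fun print s = fn () => TextIO.output (TextIO.stdOut, s)
  fun run m   = m ()
end

(* Because IO is sealed opaquely, a print cannot hide inside an
   int -> int function; the best one can write is:                *)
fun f x = IO.bnd (IO.print "Hello world") (fn () => IO.ret (x + 42))
(* f : int -> int IO.io *)
```

      In this sketch, writing let val _ = IO.print "Hello world" in x + 42 end still type-checks, but it merely constructs an unused unit IO.io value; nothing is printed unless the computation is threaded through bnd and eventually handed to run.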

    • meteficha1 says:

      @Abstract Type:

      So basically what you are trying to say is that if we threw away all the standard libraries and reimplemented them like their Haskell counterparts, then ML would be like Haskell. Sorry, but that doesn’t cut it. If I were to reimplement the whole goddamn thing, I wouldn’t be using ML in the first place, I would be using Haskell.

      So yes, ML doesn’t have monads, only Haskell has monads. Not because you can’t have monads in ML, but because you don’t have them *today*. You don’t have a community accustomed to enforcing their use. And all this besides the lack of syntactic sugar. You’re missing the point by miles.

    • Abstract Type says:

      The “reimplementation” you describe is quite trivial and has been done. But in any case my point is that it’s not quite a good idea.

    • andrejbauer says:

      I am sorry, I don’t see how you’ll be able to enforce types that expose effects by redefining the standard library. If there is any way to get an effect in ML, then I can write a function of type int -> int which causes effects when evaluated, even though its type does not indicate that. This is an essential difference between ML and Haskell. And I care about this because there are 3rd-order functions which realize certain non-classical reasoning principles in analysis, and I don’t know how to prove them correct (or whether they even are correct) in ML because of the mess with effects. So I’d really like to know how to force ML to expose the presence of effects in the type. Perhaps a reference would help.

    • Abstract Type says:

      It depends in part on what you mean by effect. I think that exceptions are pure, for example; not their allocation, which is stateful, but the control aspect. I would admit non-termination because to do otherwise is to encode the proof of totality into the code itself; this can be terrible. So it is likely that what I am suggesting is not pure enough for you.

    • ee8a91jjf says:

      … effects which are not visible in the types

      +1
