Hacker News | jmct's comments

Wonderful bit of history!

I'm one of the faculty that teach the current version of this course.

Would you mind getting in contact with us? We would love to learn more about PL/UM. It would mean a lot: {jmct|dvanhorn}@umd.edu


Those are my videos!


Thank you Jose. Are these videos available for non-students?

Edit: https://www.youtube.com/channel/UCfZ7HFoaeSA7zPoGelA3aiw


Looks like you found them, and yes. :D

They weren't made with the thought of being generally consumed, but I do plan on making videos for the wider CS/programming community in the future.


Please do, I would absolutely work through this during a vacation!


I'm enjoying those!

Honestly, I had a bit of impostor syndrome kick in when you mentioned that the 330 course was required, being self-taught and all... but I know OCaml so ¯\_(ツ)_/¯

Currently going through the Crafting Interpreters book and this seems like an excellent supplement for my journey into the world of language hacking.

Thanks a lot for posting those, they are very clear and encouraging!


Apologies for causing a spike in your impostor syndrome!

330 is 'required' in the sense that you're not allowed to take 430 at UMD without having taken 330.

Perhaps a better way of phrasing it would have been to list the things we assume students have retained from 330. One of which is OCaml :D

I hope these videos provide some value to you! Always feel free to reach out if there's anything you think I could do better.


That's true in one direction, but not the other.

The other angle is that concurrent programs are still concurrent on a uni-processor. Parallel programs are not.


What is a "concurrent program?" If concurrency is about program structure, does that structure have to be explicit? If I rewrite a loop to support out-of-order execution (OOE), is it thereby concurrent?

A uniprocessor may be pipelined, superscalar, support SIMD, etc., which would seem to satisfy the definition of parallelism. So isn't parallelism achievable on a uniprocessor?

The more I think about this stuff the less sense the distinction makes.


I would define a concurrent program as one with multiple independent flows of execution. Whether they are scheduled on multiple CPUs at the same time or just one is what makes them parallel or not.

If you look at what happens inside a CPU, then you could probably say a uniprocessor has some degree of parallelism, but this is not a common point of view AFAIK, it is too low level. Also, the execution contexts inside a pipelined CPU are not entirely independent: you can have pipeline stalls due to dependencies. This is not the case with, say, two application threads running independent tasks.


> So isn't parallelism achievable on a uniprocessor?

Of course! Normally unqualified parallelism refers to multiprocessor parallelism, but instruction-level parallelism is also a thing. So is overlapping I/O or memory access with computation. Even memory accesses themselves can be parallelized (caches are multi-ported, CPUs allow multiple outstanding cache misses), as can I/O (multiple disks or network cards).


From what I recall, Quake existed because of instruction-level parallelism between the ALU and FPU. This really does matter.


Just to add on to this:

The languages people use at Galois lean towards functional programming languages: Haskell, Coq, etc., and systems languages: C and, increasingly, Rust.


The reason GHC won't infer the definition of Applicative is because there can be multiple valid `Applicative` instances for a type (unlike `Functor` where there is a unique (non-trivial) instance).

The canonical example of this is lists, which have two valid `Applicative` instances.

The author of the post seems to have this realization, but I wanted to call it out, just in case.
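For concreteness, here is a small runnable sketch of the two lawful list instances: the default cartesian-product `Applicative` (the one compatible with the list `Monad`), and the pairwise one exposed via the `ZipList` newtype from `Control.Applicative`:

```haskell
import Control.Applicative (ZipList (..))

main :: IO ()
main = do
  -- Default instance: cartesian product, compatible with the list Monad.
  print ((+) <$> [1, 2] <*> [10, 20])
  -- ZipList newtype: pairwise zipping, an equally lawful Applicative.
  print (getZipList ((+) <$> ZipList [1, 2] <*> ZipList [10, 20]))
```

The first `print` yields `[11,21,12,22]`, the second `[11,22]` — same underlying type, two different (both law-abiding) notions of "combine".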


Any Applicative can be reversed (running its effects in the opposite order), although the reversed instance will not necessarily be compatible with the Monad instance, and sometimes the reversed instance coincides with the original.


That's true, but only one of the instances is 'compatible' with the Monad definition, which requires (<*>) = ap (the first 'recipe' he showed for implementing Applicative).


I think that law is slightly too restrictive; there’s a useful class of Monad instances where it doesn’t hold: commutative monads, in which:

    x <- f
    y <- g
is equal to:

    y <- g
    x <- f
provided that “x” is not free in “g”. A good example is an async monad that provides concurrency in (<*>) and only blocks when there’s a data dependency in (>>=)—the result is the same regardless of evaluation order, but the implementations (and performance characteristics) are different. You could handwave it away by arguing that they’re “morally equivalent”, but I think it pays to be precise about properties like commutativity of actions.
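As a minimal runnable sketch (in the spirit of `Concurrently` from the async package, not any particular production implementation): `(<*>)` forks its argument action so the two sides overlap, while `(>>=)` must sequence — so `(<*>)` and `ap` agree on results but not on scheduling.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Illustrative wrapper: the name Conc is made up for this sketch.
newtype Conc a = Conc { runConc :: IO a }

instance Functor Conc where
  fmap f (Conc io) = Conc (fmap f io)

instance Applicative Conc where
  pure = Conc . pure
  Conc f <*> Conc x = Conc $ do
    var <- newEmptyMVar
    _  <- forkIO (x >>= putMVar var) -- start the argument early
    f' <- f                          -- run the function side meanwhile
    f' <$> takeMVar var              -- block only on the data dependency

instance Monad Conc where
  Conc x >>= k = Conc (x >>= runConc . k)

main :: IO ()
main = runConc ((+) <$> Conc (pure 1) <*> Conc (pure 2)) >>= print
```

For pure actions like these the answer is the same either way (it prints 3); the difference only shows up in when effects run, which is exactly the commutativity caveat above.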


> the result is the same regardless of evaluation order, but the implementations (and performance characteristics) are different. You could handwave it away by arguing that they’re “morally equivalent”, but I think it pays to be precise about properties like commutativity of actions.

You have to decide whether they're equivalent or they're not, and if they're not equivalent (for your purposes) you probably shouldn't provide both instances. If you implement (<*>) and ap but they're not equivalent, a future maintainer will get a nasty surprise sooner or later.


I mean if you can’t observe any difference from safe code, then despite having different intensional/operational definitions, they are extensionally equivalent, and that’s mostly what I care about.

The ApplicativeDo desugaring, which uses the Applicative combinator instead of bind in exactly the circumstance I described above, was added specifically to support this use case when we (at Facebook) were writing Haxl, which is just such an example of a commutative monad; it’s built on IO, but doesn’t expose IO to safe code, so there’s no way to observe the concurrency from inside Haxl.

Granted, I’m biased because I have a somewhat unpopular definition of “effect” and “side effect”—if an effect can’t be observed from code by a particular observer that is safe wrt some property, then to me it’s not a side-effect.

It’s pretty widely accepted that within an ST action, mutating an STRef is a side effect to any observers within that action, but from the POV of the caller, the code is pure and thus side-effect–free; but I argue that mutating an IORef within an IO action is also not side-effectful, say from the POV of another thread, if the IORef is never shared—e.g.:

    -- Morally equivalent to “pure 1”.
    do
      x <- newIORef (0 :: Int)
      modifyIORef' x (+ 1)
      pure x
Again, all I’m really saying is that you need to be precise about what model of effects you’re talking about, and what properties you guarantee re. safety, commutativity wrt other effects, &c.


> if you can’t observe any difference from safe code, then despite having different intensional/operational definitions, they are extensionally equivalent, and that’s mostly what I care about.

That's true if the only things you care about are the things that are visible from safe code. But if you care about performance characteristics (which is presumably the whole point of this kind of monad) then code that gives the same result but has different performance characteristics is not equivalent for your purposes.

Ultimately, if x and y are marked as equivalent then a future maintainer will expect to be able to blindly replace x with y - and by convention that's true of (<*>) and ap. So you shouldn't define (<*>) and ap in such a way that replacing one with the other will change important characteristics of the code - you should only make things look equivalent in your code if they are equivalent for your purposes.


Was that last line supposed to be "readIORef x" instead of "pure x"? If you return the IORef then the result is equivalent to "newIORef 1", not "pure 1"—and newIORef does have observable side-effects. (Two IORefs are distinct even if they hold the same value.)
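With that fix applied, the snippet does behave like `pure 1` — a minimal runnable version for reference:

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Morally equivalent to "pure 1", since the IORef never escapes.
main :: IO ()
main = do
  x <- newIORef (0 :: Int)
  modifyIORef' x (+ 1)
  readIORef x >>= print
```

Returning `readIORef x` yields the plain `Int`, so no observer can distinguish this from `pure 1`; returning the ref itself would, as noted, expose `newIORef`'s observable effect.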


The problem of getting Homomorphic Encryption in the hands of data scientists is what the RAMPARTS project we're working on at Galois is about (https://galois.com/project/ramparts/). We presented some preliminary results at JuliaCon last year: https://www.youtube.com/watch?v=_KLlMg6jKQg

The main difference from the work in the article is that RAMPARTS is _not_ an API wrapper. It allows the Julia programmer to write the function as they normally would, and then compiles it to FHE. This way the programmer can use _the same_ function for plaintext testing and development and for running under FHE.

Since the JuliaCon talk we've pushed this a bit further and are actually coming to a close on the project. We learned a lot about the difficulties of making FHE invisible to the user, which was the aim of the project. We're hoping to continue this work in the future and make it even more seamless for developers.


It's a funny story, but it's also not true. You don't want pencils in space (not because they don't work, but because you don't want graphite dust everywhere).

A source: https://www.scientificamerican.com/article/fact-or-fiction-n...


Funny, funny story.

The US used graphite and paid ridiculous amounts of money for it. The USSR used grease pencils or outright crayons instead.


That's why I said "anecdote", i.e. "an account regarded as unreliable or hearsay."


I like the part where you skipped right over "a short and amusing or interesting story about a real incident or person" so you could quote "an account regarded as unreliable or hearsay".

It's okay to be wrong. I distinctly remember believing the (false) pencil story once, before reading more about it and learning the (far more interesting) actual story. And also realizing that the false pencil story pushes certain goals and agendas, too, so probably isn't accidental.


Galois does not work on financial products. It is a research services company (i.e. R&D for hire) that focuses on high-assurance software.


Haskell does not do this. Haskell is lazy (computed results are shared), but Haskell does not memoize by default. To illustrate:

    func y = let z = y + y
             in (add 1 z) * (add 1 z)
In `func`, the result of computing `y + y` is shared because it has been given a name: `z`. However, the result of `add 1 z` is _not_ given a name and therefore has to be computed twice. Because Haskell is pure, both calls will return the same result; however, that result is not saved.

This avoids needing to save the results of _all_ function calls.
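A quick way to see this for yourself (assuming no optimizer rewriting, i.e. `-O0` or `runghc`) is to attach `Debug.Trace.trace` to a hypothetical `add`:

```haskell
import Debug.Trace (trace)

add :: Int -> Int -> Int
add a b = trace "add evaluated" (a + b)

func :: Int -> Int
func y = let z = y + y        -- z is named, so y + y is computed once
         in add 1 z * add 1 z -- add 1 z is unnamed: computed twice

main :: IO ()
main = print (func 2)
```

Run without optimization and "add evaluated" appears twice on stderr, while the final answer is 25 — sharing for named values, no memoization for repeated calls.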


That's a bad example, since GHC will perform common subexpression elimination, which effectively gives the same name to the two `add 1 z` expressions (unless you specifically ask it not to, e.g. with `-O0`).

Furthermore, Incremental isn't memoizing every function call. It just keeps the intermediate results of the latest computation in memory, so that when the inputs change, it can determine exactly which subresults need a recomputation, and which can be reused. It's like Excel, but with dynamic dependencies.

(But yes, Haskell definitively does not do this. You could say that Haskell is about doing the least amount of work to compute something fresh, while Incremental is about working the least to recompute something we (almost) already computed just before.)


A better example might have been

    func (f (add 1 z), f (add z 1))
Since f is not memoized, it will be evaluated once for each component of the tuple.


Do you live in a big (or bigger than average) city?

I live in the UK in a small town and cannot imagine having kids but no car. Most of my friends find having a car essential even without kids (though I prefer not to have a car unless 100% necessary).

On my own I am fine with public transport, but anything between walking around the village and going to a big city isn't really covered by public transport.


My sister has three children, zero cars and some (OK, about ten) bicycles, including a cargo bike and a couple of trailers. They live in Edinburgh, which has a fair-to-medium public transport network, and they manage just fine. I've travelled around with them and the kids and it really isn't a big deal, including long-distance journeys involving trains. One thing that does require a car is getting to out-of-the-way locations; we had to rely on relatives with vehicles for a trip out to the Museum of Flight, about 20 miles away and not at all accessible by public transport. We could probably have used the 'City Car Club' (similar to 'ZipCar') for that, if really necessary, although with children you then need to provide appropriate car seats, which is a hassle.

Anyway, for day-to-day travel and commuting, even with children, cars are not required in a medium sized city like Edinburgh.

One interesting (albeit off-topic) point she and her husband made to me was that having kids in push-chairs/buggies suddenly makes you appreciate the issues that disabled people in wheelchairs must have all the time for access to buildings.

