– the first course in the specialization has a very good and engaging start, but the gap between lectures and problems widens quickly after that (maybe that's why the author boasts about a "challenging" course "not for everyone"). I'm hesitant to take the next course of the specialization.
MIT had a similar course on edX and that one was brutal as well (Computational Probability and Inference). I guess nobody figured out how to teach it the easy way.
I've been meaning to either take the course or read the book. I'm curious whether you've read the book [0], how you'd compare the two, and whether you'd recommend one, the other, or both.
For me Bayesian networks are a tool of thought, and I think it's worth learning them the same way it's worth learning to sketch functions with pen and paper.
Yes, I think so. PGM or some improvement on that is relevant to doing things like reasoning correctly based on evidence.
I envision more sophisticated AGI systems using DNNs or other NN techniques to learn about the world and to take in uncertain input and make sense of it. PGMs or something similar would then be used to reason correctly (in the mathematical sense) about what the agent should do to accomplish its goals.
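To make "reasoning correctly from evidence" concrete: here's a minimal sketch of exact inference by enumeration on the classic rain/sprinkler/wet-grass toy network. The CPT numbers are made up for illustration; a real system would use a library like pgmpy rather than hand-rolled enumeration, but the hand-rolled version shows what the math is actually doing.

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
# All probabilities below are illustrative, not from any real dataset.
P_rain = {True: 0.2, False: 0.8}                  # P(R)
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=True)
               False: {True: 0.4, False: 0.6}}    # P(S | R=False)
P_wet = {(True, True): 0.99, (True, False): 0.9,  # P(W=True | S, R)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    """Joint probability P(R=r, S=s, W=w) via the chain rule of the network."""
    pw = P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1 - pw)

def posterior_rain_given_wet():
    """P(Rain=True | WetGrass=True) by enumerating (summing out) Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(posterior_rain_given_wet())  # roughly 0.358 with these made-up CPTs
```

Observing wet grass raises the belief in rain from the 0.2 prior to about 0.36; with different CPTs the update could just as well go the other way. This enumeration is exponential in the number of variables, which is exactly why the course spends so much time on smarter inference algorithms.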
I too took the first iteration, and it was... quite terrible.
The lectures were OK, but the homework was more than challenging. Not only did you have to battle with the topic itself, you also needed to magically acquire knowledge of some totally unrelated domain (I think it was genetics) and wrangle a quite baroque representation of that domain, shoehorned into a programming language totally unsuited for it.
I was on the verge of despair because of those secondary problems. Really.