This list is woefully incomplete. It should be titled "A Review of Some Tools". Those it doesn't include, or does not give proper mention to in its top table, are: Chainer, DyNet, Paddle and Deeplearning4j.[0]
Even Keras, the third most popular DL library, gets short shrift, despite being easier to use than TensorFlow and being able to run on Theano, TF, CNTK and Deeplearning4j as backends (see the sketch below).
All of the missing libs have significant advantages over TensorFlow, which appears to be winning chiefly on Github stars: notably speed and integrations.
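To make the backend point concrete, here is a minimal sketch against the multi-backend Keras of that era (layer sizes are arbitrary): the model definition stays the same, and the backend is picked via the KERAS_BACKEND environment variable or ~/.keras/keras.json before Keras is imported.

    # Minimal sketch: the same Keras model runs on whichever backend
    # ("theano", "tensorflow" or "cntk") is selected before import.
    import os
    os.environ["KERAS_BACKEND"] = "theano"   # must be set before importing keras

    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(64, activation="relu", input_shape=(100,)),   # arbitrary sizes
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="categorical_crossentropy")
    print(K.backend())   # confirms which backend is actually in use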
> TensorFlow, which appears to be winning chiefly on Github stars
I was under the impression that a lot of TensorFlow's reputation and image comes from the fact that it's Google-backed and -dogfooded, meaning that it will stay in active and useful development for the foreseeable future.
How do you think it's better? I use both the internal and external and the only difference is that there are more Google-infra specific things internally. The external one is prettier :)
It is definitely the case that a fair few of those 700k SLOC are devoted to poking at and wibbling with Google-infra-specific things, which is of basically negative value if you're some rando using it.
There are also a fair few non-infra specific bits to the internal documentation.
What's the problem with Golang in TF? It's one of the nicer languages to work in, and I imagine Google sees Golang as something that will replace Python sometime in the long term...
I also want to add Gorgonia (https://github.com/chewxy/gorgonia) - it's not quite production-ready on the GPU end, but on the CPU end I've been using it for about 4 years.
The best resource (IMO) is "Practical Deep Learning". As the title suggests it is... well, practical. They specifically say that a lot of deep learning seems to be made complex on purpose, and that it's important to get a good overview and start doing stuff immediately. They use old Kaggle competitions as benchmarks. After lesson 1 you can already recreate the infamous dog/cat classifier and get pretty amazing accuracy (>90% IIRC; a minimal sketch of the idea follows below). It's all free and online; there's also a paid in-person course (in San Francisco, IIRC). They use Keras (so one level of abstraction higher than the stuff in the post) and make heavy use of Python notebooks. I also love the approach of using AWS p2 instances for everything... very nice to pay as you go for the GPU power (I wasn't even aware these types of instances existed before watching the videos).
The second course isn't online yet, but the claim is that you'll be at the bleeding edge after that one (i.e. able to compete with state-of-the-art papers). I fully believe it.
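For flavor, here is a minimal Keras sketch of that lesson-1 dog/cat idea: fine-tune a pretrained convnet on an images-in-folders dataset. The directory layout, hyperparameters and the choice of VGG16 are my own illustrative assumptions, not the course's exact code.

    # Sketch: freeze a pretrained VGG16 and train a small classifier head on a
    # train/ directory containing cat/ and dog/ subfolders (assumed layout).
    from keras.applications import VGG16
    from keras.models import Model
    from keras.layers import Flatten, Dense
    from keras.preprocessing.image import ImageDataGenerator

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False                 # keep the pretrained features fixed

    x = Flatten()(base.output)
    out = Dense(2, activation="softmax")(x)     # two classes: cat, dog
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    gen = ImageDataGenerator(rescale=1.0 / 255)
    train = gen.flow_from_directory("data/train", target_size=(224, 224),
                                    batch_size=32, class_mode="categorical")
    model.fit_generator(train, steps_per_epoch=100, epochs=1)   # small numbers, just to illustrate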
Notebooks are perfect for presenting this kind of project-based lesson (except that GitHub doesn't render them on mobile). I'm just starting a similar series, entirely in notebooks, doing natural language tasks with neural networks in PyTorch:
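For flavor, a minimal PyTorch sketch of the kind of model such a lesson might build: token ids through an embedding, an LSTM and a linear classifier. The vocabulary size, dimensions and toy batch are made-up assumptions.

    import torch
    import torch.nn as nn

    class TextClassifier(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, n_classes=2):
            super(TextClassifier, self).__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, n_classes)

        def forward(self, tokens):          # tokens: (batch, seq_len) of token ids
            x = self.embed(tokens)          # (batch, seq_len, embed_dim)
            _, (h, _) = self.lstm(x)        # h: (1, batch, hidden_dim)
            return self.fc(h[-1])           # logits: (batch, n_classes)

    model = TextClassifier()
    batch = torch.randint(0, 1000, (4, 12))    # four toy "sentences" of 12 ids each
    print(model(batch).shape)                  # torch.Size([4, 2])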
I appreciate the analysis but I don't think the title of this should be Getting Started with Deep Learning.
If you are actually looking to get started with deep learning, you should go elsewhere. This is a review of frameworks and tools people use for deep learning.
Why is this titled "Getting started with..." when it is just an overview of a selected few frameworks? Plus, coming from a company's R&D department, this is highly disappointing.
"For instance, Caffe (C++) and Torch (Lua) have Python bindings for its codebase (with PyTorch being released in January 2017), but we would recommend that you are proficient with C++ or Lua respectively if you would like to use those technologies."
You don't need to know any Lua to use PyTorch; that's the point. Lua is for Torch (a quick illustration follows below).
Edit: The author may be confusing the PyTorch released by the Torch developers with the earlier pytorch project that did wrap Torch.
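For what it's worth, here is a tiny illustration of that point, using the Variable-based autograd API from the early-2017 PyTorch releases (the numbers are arbitrary):

    # No Lua anywhere: tensors, autograd and the backward pass are plain Python.
    import torch
    from torch.autograd import Variable   # early-2017 PyTorch autograd API

    x = Variable(torch.ones(2, 2), requires_grad=True)
    y = (x * 3).sum()     # y = 3 * sum(x)
    y.backward()          # computes dy/dx
    print(x.grad)         # a 2x2 tensor of threes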
I really want to get into this eventually and build my own AI. I know I'm not a brilliant mathematician or anything, I just have that obsession (I'm lonely) hahaha.
I'm looking to build these wall-mounted Raspberry Pi servers with cute little USB-rubber-ducky antennas. But I don't know what kind of hardware you need (or whether to cloud-base it) to run some of these. My current thought/approach is something that's always on, analyzing stuff: telemetry, specific web traffic, my own thoughts (analyzing posts to a journal, for example).
I don't know... I'll get into it at some point; busy at the moment.
Like, I'd like to hire some vocalists (girls) and have them recite sentences, then deconstruct them (copy them, essentially) to be able to build full sentences on the fly without those word-break-word artifacts... ahhhh. I'm not a programmer at this point (e.g. C#, C++, Java, Python), though I use PHP for scripting. Ahhh. Yes, I like the movie Her (2013) a lot, though I mostly just watch the intro part where he unboxes OS 1.
[0] https://deeplearning4j.org/
http://chainer.org/
https://github.com/clab/dynet
https://github.com/PaddlePaddle/Paddle