Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more

MXNet for Deep Learning

MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
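
As a minimal sketch of what this mixing looks like in the Python package of this era (`mx.nd` for imperative tensors, `mx.sym` for symbolic graphs; the shapes and layer sizes here are arbitrary assumptions):

```python
import mxnet as mx

# Imperative side: NDArray operations execute eagerly, NumPy-style,
# but the dependency scheduler may still run independent ops in parallel.
a = mx.nd.ones((2, 3))
b = a * 2 + 1                       # dispatched asynchronously to the engine
print(b.asnumpy())                  # asnumpy() blocks until the result is ready

# Symbolic side: declare a computation graph first, then bind and run it.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=4)
net = mx.sym.Activation(data=net, name='relu1', act_type='relu')

# Binding fixes shapes and devices, letting the graph optimizer plan
# memory and execution before anything runs.
executor = net.simple_bind(ctx=mx.cpu(), data=(2, 3))
executor.forward(data=b)            # feed the imperatively computed NDArray
print(executor.outputs[0].asnumpy())
```

Because both styles go through the same dependency engine, an imperatively computed NDArray can feed a symbolic executor directly, as above.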

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet

Contents

  • Features
  • Ask Questions
  • License
  • Reference Paper
  • History

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism (see the sketch after this list)
  • Support for Python, R, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
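
To illustrate the multi-GPU and cloud-friendly points above, here is a hypothetical training sketch in the v0.8-era Python API; the data paths and GPU count are assumptions, not prescribed values:

```python
import mxnet as mx

# A toy multi-layer perceptron.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=128)
act1 = mx.sym.Activation(data=fc1, act_type='relu')
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10)
softmax = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Hypothetical local paths; iterators can also read record files from
# s3:// or hdfs:// URIs when MXNet is built with the corresponding
# filesystem support enabled.
train_iter = mx.io.MNISTIter(image='data/train-images-idx3-ubyte',
                             label='data/train-labels-idx1-ubyte',
                             batch_size=100)

# Passing a list of contexts is all it takes to data-parallelize:
# each batch is split across the listed devices automatically.
model = mx.model.FeedForward(symbol=softmax,
                             ctx=[mx.gpu(0), mx.gpu(1)],
                             num_epoch=10,
                             learning_rate=0.1)
model.fit(X=train_iter)
```

For training across machines, roughly the same script can be launched with a distributed key-value store (e.g. passing `kvstore='dist_sync'` to `fit`), backed by the ps-lite submodule.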

Ask Questions

  • Please use mxnet/issues for questions about how to use MXNet and for reporting bugs

License

© Contributors, 2015-2016. Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet was initiated and designed in collaboration by the authors of cxxnet, minerva and purine2. The project reflects what we have learned from those earlier projects, and combines their important flavours to achieve efficiency, flexibility and memory efficiency.