Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more


MXNet for Deep Learning



MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
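The mutation-aware dependency scheduling described above can be sketched in plain Python. This is a toy illustration, not MXNet's actual engine (which is implemented in C++): each pushed operation declares which variables it reads and writes, and the scheduler makes it wait on the last writer of anything it touches and on outstanding readers of anything it overwrites, so independent operations run in parallel while mutations stay ordered. All names here (`ToyScheduler`, `push`) are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

class ToyScheduler:
    """Toy mutation-aware dataflow scheduler (illustration only)."""

    def __init__(self, workers=4):
        self._writer = {}    # var name -> future of the last op that wrote it
        self._readers = {}   # var name -> futures reading it since the last write
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def push(self, fn, reads=(), writes=()):
        # Collect every future this op must wait for.
        deps = []
        for v in reads:
            if v in self._writer:
                deps.append(self._writer[v])          # read-after-write
        for v in writes:
            if v in self._writer:
                deps.append(self._writer[v])          # write-after-write
            deps.extend(self._readers.get(v, ()))     # write-after-read

        def task():
            for d in deps:        # block until all dependencies finish
                d.result()
            return fn()

        fut = self._pool.submit(task)
        for v in reads:
            self._readers.setdefault(v, []).append(fut)
        for v in writes:
            self._writer[v] = fut
            self._readers[v] = []                     # a write resets readers
        return fut

# Usage: the second op reads "x", so it waits for the first op's write.
sched = ToyScheduler()
data = {"x": 0}
sched.push(lambda: data.__setitem__("x", 41), writes=("x",))
result = sched.push(lambda: data["x"] + 1, reads=("x",))
print(result.result())  # -> 42
```

Ops with disjoint read/write sets acquire no mutual dependencies and run concurrently, which is the essence of the "automatic parallelization on the fly" claimed above.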

MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet

What's New

Contents

Features

  • Design notes providing useful insights that can be reused by other DL projects
  • Flexible configuration for arbitrary computation graph
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, Scala, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
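The "mix and match" feature above can be made concrete with a toy sketch (this is illustrative plain Python, not MXNet's actual API; the `Sym` class is invented for this example). Imperative code computes a result immediately, while symbolic code first builds an expression graph that is evaluated later, which is what lets a framework optimize the whole graph before running it.

```python
# Imperative style: the addition happens immediately.
def imperative_add(a, b):
    return a + b

# Symbolic style: build a graph of deferred operations first.
class Sym:
    """Tiny symbolic expression node (toy illustration)."""

    def __init__(self, name=None, op=None, args=()):
        self.name, self.op, self.args = name, op, args

    def __add__(self, other):
        # Returns a new graph node; nothing is computed yet.
        return Sym(op="add", args=(self, other))

    def eval(self, **bindings):
        # Walk the graph, substituting bound values at the leaves.
        if self.op == "add":
            return sum(a.eval(**bindings) for a in self.args)
        return bindings[self.name]

x, y = Sym("x"), Sym("y")
graph = x + y                  # only a graph so far
print(graph.eval(x=1, y=2))    # -> 3
print(imperative_add(1, 2))    # -> 3, computed eagerly
```

In MXNet the same idea appears as eager NDArray operations alongside deferred Symbol graphs; deferring computation is what gives the graph optimization layer room to reduce memory use and fuse work.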

Ask Questions

  • Please use mxnet/issues for questions about how to use MXNet and for reporting bugs

License

© Contributors, 2015-2017. Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015

History

MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. MXNet combines aspects of each to achieve flexibility, speed, and memory efficiency.