Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Apache MXNet (incubating) for Deep Learning


Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
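The difference between the two styles can be sketched in plain Python (a toy illustration, not the MXNet API): imperative operations run as soon as they are written, while symbolic operations first build a graph that is only executed once it is bound to concrete inputs.

```python
# Toy sketch of imperative vs. symbolic execution (plain Python,
# hypothetical helper names -- not MXNet code).

# Imperative style: each operation executes immediately.
def imperative_example():
    a = [1.0, 2.0, 3.0]
    b = [x * 2 for x in a]        # runs now
    return [x + 1 for x in b]     # runs now

# Symbolic style: operations build a graph; nothing runs until eval().
class Sym:
    def __init__(self, fn, inputs=()):
        self.fn, self.inputs = fn, inputs

    def eval(self, env):
        # Evaluate dependencies first, then apply this node's op.
        args = [s.eval(env) for s in self.inputs]
        return self.fn(env, *args)

def var(name):
    return Sym(lambda env: env[name])

def times2(s):
    return Sym(lambda env, x: [v * 2 for v in x], (s,))

def plus1(s):
    return Sym(lambda env, x: [v + 1 for v in x], (s,))

graph = plus1(times2(var("a")))              # no computation yet
result = graph.eval({"a": [1.0, 2.0, 3.0]})  # whole graph runs here
```

Because the symbolic graph is known before execution, a framework can optimize and parallelize it as a whole; the imperative style trades some of that visibility for flexibility.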

MXNet is more than a deep learning project, however. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet


Features

  • Design notes that provide useful insights which can be reused by other DL projects
  • Flexible configuration of arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, Scala, C++, and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure
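The automatic-parallelism idea can be illustrated with the standard-library scheduler protocol (a minimal sketch with made-up op names, mirroring the dependency-tracking idea rather than MXNet's actual engine): operations whose inputs are ready may run concurrently.

```python
# Minimal sketch of dependency-aware scheduling using Python's stdlib
# graphlib. Op names are hypothetical; this is not the MXNet engine.
from graphlib import TopologicalSorter

# Each op maps to the set of ops it depends on.
deps = {
    "load_a": set(),
    "load_b": set(),
    "mul": {"load_a"},          # "mul" and "add" are independent,
    "add": {"load_b"},          # so they can run in parallel
    "concat": {"mul", "add"},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())   # all ops runnable in parallel now
    waves.append(sorted(ready))
    ts.done(*ready)

# waves == [['load_a', 'load_b'], ['add', 'mul'], ['concat']]
```

Each "wave" is a set of mutually independent operations; a runtime engine dispatches such waves to worker threads or devices, which is the essence of scheduling both symbolic and imperative ops on the fly.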

Ask Questions

  • Please use mxnet/issues for questions about how to use MXNet and for reporting bugs

License

Licensed under the Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. It combines aspects of each to achieve flexibility, speed, and memory efficiency.