Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with a Dynamic, Mutation-aware Dataflow Dependency Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more

Apache MXNet (incubating) for Deep Learning

Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
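To make the symbolic-versus-imperative distinction concrete, here is a small self-contained Python sketch (toy code, not MXNet's API) contrasting eager evaluation, where each operation runs immediately, with symbolic style, where operations first build an expression graph that a scheduler can later optimize and execute:

```python
# Toy illustration (not MXNet's API): imperative vs. symbolic styles.

# Imperative: each operation executes as soon as it is written.
def imperative_example():
    a = [1.0, 2.0, 3.0]
    b = [x * 2 for x in a]   # runs now
    c = [x + 1 for x in b]   # runs now
    return c

# Symbolic: operations build a graph first; evaluation happens later,
# which is what lets a dependency scheduler reorder, fuse, or
# parallelize the work before anything runs.
class Node:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    def __mul__(self, k):
        return Node(lambda v: [x * k for x in v], [self])

    def __add__(self, k):
        return Node(lambda v: [x + k for x in v], [self])

def variable():
    return Node(None, [])    # placeholder, bound at evaluation time

def evaluate(node, binding):
    if node.op is None:      # leaf: substitute the bound value
        return binding
    return node.op(evaluate(node.inputs[0], binding))

# Build the graph once...
x = variable()
y = x * 2 + 1

# ...then execute it with concrete data.
print(imperative_example())          # [3.0, 5.0, 7.0]
print(evaluate(y, [1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]
```

In MXNet the same idea appears as the imperative NDArray interface and the symbolic Symbol interface, which the dependency scheduler parallelizes transparently.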

MXNet is more than a deep learning framework. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration of arbitrary computation graphs
  • Mix-and-match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory-efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, Scala, C++, and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure

Ask Questions

  • Please use mxnet/issues for questions on how to use MXNet and for reporting bugs

License

Licensed under the Apache License, Version 2.0.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. MXNet combines aspects of each of them to achieve flexibility, speed, and memory efficiency.