namespace FlexFlow {
/**
\mainpage FlexFlow Train
\brief FlexFlow Train is a deep learning framework that accelerates distributed DNN training by automatically searching for efficient parallelization strategies.
\section root-layout Project Layout
The bulk of the FlexFlow source code is stored in the following folders:
- \subpage lib "": The C++ code that makes up FlexFlow's core, split up into a number of libraries.
- \subpage bin "": Command-line interfaces for FlexFlow and associated tools (all in C++). Generally, these are thin wrappers that parse command-line arguments and then call into functions defined in \ref lib for the actual logic. A description of each binary can be found \ref bin "here".
- `bindings`: Python (or any additional languages added in the future) bindings for FlexFlow Train. Still mostly unimplemented.
- `docs`: Config files for documentation generators and code for generating diagrams. The documentation itself lives in the source directories, written in <a href="https://www.doxygen.nl/manual/index.html">Doxygen</a> syntax either in standalone `.dox` files or inline in header files.
- `cmake`: CMake configuration for building FlexFlow Train. Note that unless you're modifying the build configuration (i.e., adding a library, additional dependencies, etc.), you generally should use \ref contributing-proj "proj" instead of interacting with CMake directly.
\section root-contributing Contributing
We welcome all contributions to FlexFlow Train, from bug fixes to new features and extensions.
If you encounter any bugs or have suggestions, please let us know by <a href="https://github.com/flexflow/flexflow-train/issues">submitting an issue</a>.
For instructions on how to contribute code to FlexFlow Train, see \subpage contributing.
\section root-citations Citations
- Colin Unger, Zhihao Jia, Wei Wu, Sina Lin, Mandeep Baines, Carlos Efrain Quintero Narvaez, Vinay Ramakrishnaiah, Nirmal Prajapati, Pat McCormick, Jamaludin Mohd-Yusof, Xi Luo, Dheevatsa Mudigere, Jongsoo Park, Misha Smelyanskiy, and Alex Aiken. [Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization](https://www.usenix.org/conference/osdi22/presentation/unger). In Proceedings of the Symposium on Operating Systems Design and Implementation (OSDI), July 2022.
- Zhihao Jia, Matei Zaharia, and Alex Aiken. [Beyond Data and Model Parallelism for Deep Neural Networks](https://cs.stanford.edu/~zhihao/papers/sysml19a.pdf). In Proceedings of the 2nd Conference on Machine Learning and Systems (MLSys), April 2019.
- Zhihao Jia, Sina Lin, Charles R. Qi, and Alex Aiken. [Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks](http://proceedings.mlr.press/v80/jia18a/jia18a.pdf). In Proceedings of the International Conference on Machine Learning (ICML), July 2018.
\section root-team The Team
FlexFlow Train is developed and maintained by teams at CMU, Facebook, Los Alamos National Lab, MIT, Stanford, and UCSD (alphabetically).
\section root-license License
FlexFlow Train is licensed under the Apache License 2.0.
*/
}