PyTorch
Source: Wikipedia, the free encyclopedia (revision of 2024-04-17 15:00 UTC)
PyTorch is an open-source machine learning library for Python, based on Torch[3][4][5], used in fields such as computer vision and natural language processing[2]. It was originally developed by Facebook's AI Research lab (FAIR)[6][7][8]. PyTorch is free and open-source software released under the modified BSD license.
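As a minimal illustration of the library's core workflow (this sketch is not taken from the cited sources; it assumes a standard `torch` installation), PyTorch combines NumPy-like tensors with automatic differentiation:

```python
import torch

# Create a tensor that records operations for automatic differentiation
x = torch.tensor([2.0, 3.0], requires_grad=True)

# A simple scalar function of x: y = sum(x ** 2)
y = (x ** 2).sum()

# Backpropagation populates x.grad with dy/dx = 2x
y.backward()
print(x.grad)  # tensor([4., 6.])
```

The same autograd mechanism underlies training of `torch.nn` models, where gradients computed by `backward()` drive an optimizer step.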
- ^ "Release 2.2.2". Published March 27, 2024. Retrieved April 1, 2024.
- ^ "Natural Language Processing (NLP) with PyTorch — NLP with PyTorch documentation". dl4nlp.info. Retrieved March 30, 2019.
- ^ Yegulalp, Serdar (January 19, 2017). "Facebook brings GPU-powered machine learning to Python". InfoWorld. Retrieved March 30, 2019.
- ^ Lorica, Ben (August 3, 2017). "Why AI and machine learning researchers are beginning to embrace PyTorch". O'Reilly Media. Retrieved March 30, 2019.
- ^ Ketkar, Nikhil (2017). "Introduction to PyTorch". Deep Learning with Python. Apress, Berkeley, CA. pp. 195–208. doi:10.1007/978-1-4842-2766-4_12. ISBN 9781484227657.
- ^ Patel, Mo (December 7, 2017). "When two trends fuse: PyTorch and recommender systems". O'Reilly Media. Retrieved March 30, 2019.
- ^ Mannes, John. "Facebook and Microsoft collaborate to simplify conversions from PyTorch to Caffe2". TechCrunch. Retrieved March 30, 2019. "FAIR is accustomed to working with PyTorch — a deep learning framework optimized for achieving state of the art results in research, regardless of resource constraints. Unfortunately in the real world, most of us are limited by the computational capabilities of our smartphones and computers."
- ^ Arakelyan, Sophia (November 29, 2017). "Tech giants are using open source frameworks to dominate the AI community". VentureBeat. Retrieved March 30, 2019.
- ^ "Uber AI Labs Open Sources Pyro, a Deep Probabilistic Programming Language". Uber Engineering Blog (November 3, 2017). Retrieved December 18, 2017.
- ^ PYTORCH-TRANSFORMERS: PyTorch implementations of popular NLP Transformers. PyTorch Hub (December 1, 2019). Retrieved December 1, 2019.
- ^ GitHub - catalyst-team/catalyst: Accelerated DL & RL. Catalyst-Team (December 5, 2019). Retrieved December 5, 2019.
- ^ "PyTorch". www.pytorch.org. Retrieved December 5, 2019.
- ^ a b "PyTorch – About". pytorch.org. Archived from the original on June 15, 2018. Retrieved June 11, 2018.
- ^ Synced (April 2, 2018). "Caffe2 Merges With PyTorch". Retrieved March 30, 2019.
- ^ Chainer Team (December 5, 2019). "Chainer/CuPy v7のリリースと今後の開発体制について" [Release of Chainer/CuPy v7 and the future development structure] (in Japanese). Chainer Blog. Retrieved August 10, 2020.
- ^ a b c d PyTorch 2.1: automatic dynamic shape compilation, distributed checkpointing
- ^ "An Introduction to PyTorch – A Simple yet Powerful Deep Learning Library". analyticsvidhya.com (February 22, 2018). Retrieved June 11, 2018.
- ^ a b c d torch.tensor PyTorch Foundation
- ^ Autograd PyTorch Foundation
- ^ Complex Numbers PyTorch Foundation
- ^ a b Quantization PyTorch Foundation
- ^ torch.Tensor — PyTorch 2.3 documentation PyTorch Foundation
- ^ Using FP8 with Transformer Engine NVIDIA
- ^ Characteristics of pruning support in NVIDIA Ampere (in Japanese). Impress. July 20, 2020.
- ^ Sparse Semi-Structured Tensors PyTorch Foundation
- ^ a b c d e Module PyTorch Foundation
- ^ "The C++ Frontend". PyTorch Master Documentation. Retrieved July 29, 2019.
- ^ Tensor Basics PyTorch Foundation
- ^ a b c d PyTorch 2.0 PyTorch Foundation
- ^ Installation PyTorch Foundation
- ^ Accelerating PyTorch inference up to 6x with Torch-TensorRT (in Japanese). NVIDIA. December 2, 2021.
- ^ DeepSpeed-MII: instant speedup on 24,000+ open-source DL models with up to 40x cheaper inference. Microsoft. October 10, 2022.
- ^ DeepSpeed Deep Dive — Model Implementations for Inference (MII). Towards Data Science. November 17, 2022.
- ^ Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.13.0+cu117 documentation PyTorch Foundation
- ^ Introducing PyTorch Fully Sharded Data Parallel (FSDP) API. PyTorch Foundation. March 14, 2022.
- ^ ZeRO Microsoft
- ^ Efficient Memory management - FairScale documentation Facebook Research
- ^ ZeRO++ Microsoft
- ^ ZeRO-Infinity and DeepSpeed: Unlocking unprecedented model scale for deep learning training Microsoft
- ^ ZeRO-Offload Microsoft
- ^ a b Pipeline Parallelism PyTorch Foundation
- ^ Pipeline Parallelism - FairScale documentation Facebook Research
- ^ PiPPy PyTorch Foundation
- ^ Pipeline Parallelism - DeepSpeed Microsoft
- ^ PETALS: Collaborative Inference and Fine-tuning of Large Models, p. 2. Alexander Borzunov et al. 2023.
- ^ FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU, p. 6. Ying Sheng et al. 2023.
- ^ Tensor Parallelism - torch.distributed.tensor.parallel PyTorch Foundation
- ^ Model Parallelism — transformers 4.11.3 documentation Hugging Face
- ^ a b Mixture of Experts - DeepSpeed Microsoft
- ^ Handling big models for inference HuggingFace
- ^ How 🤗 Accelerate runs very large models thanks to PyTorch. HuggingFace. September 27, 2022.
- ^ ZeRO-Inference: Democratizing massive model inference. Microsoft. September 9, 2022.
- ^ Docs > CUDA semantics - CUDA streams PyTorch Foundation
- ^ a b Pickle Scanning HuggingFace
- ^ Python pickle serialization format: format specification Kaitai Project
- ^ torch.onnx PyTorch Foundation
- ^ "TorchScript is a way to create serializable and optimizable models from PyTorch code." PyTorch. TorchScript.
- ^ "TorchScript is a statically typed subset of Python" PyTorch. TorchScript Language Reference.
- ^ "TorchScript program that can be run independently from Python, such as in a standalone C++ program." PyTorch. TorchScript.
- ^ "In PyTorch 1.10, we’ve added an LLVM-based JIT compiler for CPUs that can fuse together sequences of torch library calls" PyTorch. PyTorch 1.10 Release.
- ^ Saving and Loading Models — PyTorch Tutorials 1.12.1+cu102 documentation PyTorch Foundation
- ^ torch.onnx — PyTorch 1.13 documentation PyTorch Foundation
- ^ My spring internship – torch-mlir eager mode, OPT and blowing away the main git repo. nod.ai. June 20, 2022.
- ^ "TorchScript is executed using an interpreter attached to a JIT-optimizer and compiler." PyTorch. JIT Technical Overview. Retrieved March 19, 2022.
- ^ "The executor specializes the Graph for the particular set of inputs." PyTorch. JIT Technical Overview. Retrieved March 19, 2022.
- ^ "It propagates constants, pre-computing as much as possible" PyTorch. JIT Technical Overview. Retrieved March 19, 2022.
- ^ "Eliminating dead code" PyTorch. JIT Technical Overview. Retrieved March 19, 2022.
- ^ "Scripting a function or nn.Module will inspect the source code, compile it as TorchScript code using the TorchScript compiler, and return a ScriptModule or ScriptFunction." PyTorch. TORCH.JIT.SCRIPT. pytorch.org. Retrieved August 29, 2023.
- ^ "Using torch.jit.trace and torch.jit.trace_module, you can turn an existing module or Python function into a TorchScript ScriptFunction or ScriptModule. You must provide example inputs, and we run the function, recording the operations performed on all the tensors." PyTorch. TORCH.JIT.TRACE. pytorch.org. Retrieved August 29, 2023.
- ^ "Tracing will not record any control-flow like if-statements or loops." PyTorch. TORCH.JIT.TRACE. pytorch.org. Retrieved August 29, 2023.
- ^ "FX is a toolkit for developers to use to transform nn.Module instances." PyTorch. TORCH.FX. Retrieved March 23, 2022.
- ^ "torch.fx, a program capture and transformation library for PyTorch" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "torch.fx represents programs in a DAG-based IR" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "FX consists of three main components: a symbolic tracer, an intermediate representation, and Python code generation." PyTorch. TORCH.FX. Retrieved March 23, 2022.
- ^ "graph-based ... IR ... Program transformations ... is as simple embedded programming languages that are meta-programmed from a host language, predominantly Python" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "torch.fx provides an fx.graph_drawer package, which gives the user the ability to visualize torch.fx graphs with Graphviz" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "Quantization makes use of torch.fx’s graph and GraphModule representation to simultaneously modify the program code and weight values." Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "6.2.2 Fusion Optimizations ... torch.fx provides the necessary non-local program context and state modification facilities needed for this transformation with its ahead-of-time, graph-based nature" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "The project was quickly developed using torch.fx’s Python APIs as well as TensorRT’s Python APIs, creating a translation layer between the two." Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ "the purposes of serialization or export. For instance, TorchScript" Reed, et al. (2021). Torch.fx: Practical Program Capture and Transformation for Deep Learning in Python. arXiv.
- ^ functorch.compile (experimental) — functorch 1.13 documentation PyTorch Foundation
- ^ functorch.compile.ts_compile — functorch 1.13 documentation PyTorch Foundation
- ^ TorchDynamo(torch.compile) integration in PyTorch XLA PyTorch Foundation
- ^ a b The Path to Achieve Ultra-Low Inference Latency With LLaMA 65B on PyTorch/XLA PyTorch Foundation
- ^ Quantization PyTorch Foundation
- ^ a b c PyTorch Mobile PyTorch Foundation
- ^ a b ExecuTorch PyTorch Foundation
- ^ torch.utils.mobile_optimizer PyTorch Foundation
- ^ torch.nn - Transformer Layers PyTorch Foundation
- ^ New Releases: PyTorch 1.2, torchtext 0.4, torchaudio 0.3, and torchvision 0.4. PyTorch Foundation. August 8, 2019.
- ^ a b c PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever. PyTorch Foundation. March 15, 2023.
- ^ A BetterTransformer for Fast Transformer Inference. PyTorch Foundation. July 12, 2022.
- ^ BetterTransformer - Overview Hugging Face
- ^ Transformer for PyTorch NVIDIA
- ^ Fairseq Meta AI
- ^ Video, music, and games: the expanding possibilities of AI entertainment (in Japanese). ITmedia. November 11, 2022.
- ^ AltDiffusion Hugging Face
- ^ a b Models and pre-trained weights — Torchvision main documentation PyTorch Foundation
- ^ a b c d e f g New Library Updates in PyTorch 2.1. PyTorch Foundation. October 4, 2023.
- ^ torchaudio.pipelines PyTorch Foundation
- ^ SST-2 Binary text classification with XLM-RoBERTa model PyTorch Foundation
- ^ T5-Base Model for Summarization, Sentiment Classification, and Translation PyTorch Foundation
- ^ torchtext.transforms PyTorch Foundation
- ^ Summary of the tokenizers Hugging Face
- ^ torchtext.vocab PyTorch Foundation
- ^ TorchRL PyTorch Foundation
- ^ a b torch.hub — PyTorch 1.13 documentation PyTorch Foundation
- ^ fairseq · PyPI Facebook
- ^ PyTorch Hub Ultralytics
- ^ Hugging Face Introduces Private Hub. Weights & Biases. August 3, 2022.
- ^ Integrate your library with the Hub Hugging Face
- ^ PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser, are now available. PyTorch Foundation. June 28, 2022.
- ^ a b NVIDIA DALI Documentation NVIDIA
- ^ "torch.fx.symbolic_trace() ... can’t handle control flow" PyTorch. Why do you need another way of optimizing PyTorch code? - Getting Started - TorchDynamo Overview. pytorch.org. Retrieved August 28, 2023.