What is a Recurrent neural network? A clear explanation


RNN (āru-enu-enu)

Reading: āru-enu-enu

《recurrent neural network》 ⇒ see リカレントニューラルネットワーク (recurrent neural network)


リカレント‐ニューラルネットワーク【recurrent neural network】


再帰型ニューラルネットワーク (saikigata nyūraru nettowāku)

Reading: saikigata-nyūraru-nettowāku

《recurrent neural network》 ⇒ see リカレントニューラルネットワーク (recurrent neural network)


Recurrent neural network (回帰型ニューラルネットワーク)

(Redirected from "Recurrent neural network")

Source: Wikipedia, the free encyclopedia (revision of 2023-04-09 23:53 UTC)

A recurrent neural network (回帰型ニューラルネットワーク, kaikigata nyūraru nettowāku; RNN) is the general term for, and class of, neural networks that contain internal cycles.[1]
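The internal cycle means that the hidden state computed at one time step is fed back as an input to the next step, with the same weights reused along the entire sequence. As a minimal sketch of this recurrence (not taken from the article itself; the Elman-style tanh cell, the weight names W_xh, W_hh, b_h, and the toy sizes are assumptions made for illustration):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # Feeding h_prev back into the update is the internal cycle
    # that makes the network "recurrent".
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy dimensions and random weights, purely for illustration.
rng = np.random.default_rng(0)
n_in, n_hidden, T = 3, 4, 5
W_xh = 0.1 * rng.standard_normal((n_in, n_hidden))
W_hh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
b_h = np.zeros(n_hidden)

h = np.zeros(n_hidden)                      # initial hidden state
for t in range(T):                          # unroll over a length-T sequence
    x_t = rng.standard_normal(n_in)         # dummy input at step t
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)   # same weights at every step
print(h)
```

Unrolled over the T steps, this behaves like a T-layer feedforward network with tied weights, which is why gradient methods such as backpropagation through time (references [9] and [10] below) can be applied to it.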


  1. ^ a b "If a network has one or more cycles, that is, if it is possible to follow a path from a unit back to itself, then the network is referred to as recurrent." Jordan, M.I. (1986). Serial order: A parallel distributed processing approach. (Tech. Rep. No. 8604). San Diego: University of California, Institute for Cognitive Science.
  2. ^ Li, Jinyu; Deng, Li; Haeb-Umbach, Reinhold; Gong, Yifan (2015). Robust Automatic Speech Recognition. Academic Press. ISBN 978-0128023983
  3. ^ Graves, A.; Liwicki, M.; Fernandez, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (2009). “A Novel Connectionist System for Improved Unconstrained Handwriting Recognition” (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (5). http://www.idsia.ch/~juergen/tpami_2008.pdf. 
  4. ^ a b Sak, Hasim (2014). “Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling”. Retrieved April 5, 2019.
  5. ^ a b Li, Xiangang; Wu, Xihong (15 October 2014). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
  6. ^ Miljanovic, Milos (Feb–Mar 2012). “Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction”. Indian Journal of Computer Science and Engineering 3 (1). http://www.ijcse.com/docs/INDJCSE12-03-01-028.pdf.
  7. ^ Okatani 2015, p. 112
  8. ^ Watanabe, Taro. “ニューラルネットワークによる構造学習の発展” [Advances in structure learning with neural networks]. 人工知能 (Journal of the Japanese Society for Artificial Intelligence) 31 (2): 202–209. NAID 110010039602
  9. ^ Williams, Ronald J.; Hinton, Geoffrey E.; Rumelhart, David E. (October 1986). “Learning representations by back-propagating errors”. Nature 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687. https://www.nature.com/articles/323533a0. 
  10. ^ a b Schmidhuber, Jürgen (1993). Habilitation thesis: System modeling and optimization. ftp://ftp.idsia.ch/pub/juergen/habilitation.pdf  Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN.
  11. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). “An Application of Recurrent Neural Networks to Discriminative Keyword Spotting”. Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07 (Berlin, Heidelberg: Springer-Verlag): 220–229. ISBN 978-3-540-74693-5. http://dl.acm.org/citation.cfm?id=1778066.1778092. 
  12. ^ a b c Schmidhuber, Jürgen (January 2015). “Deep Learning in Neural Networks: An Overview”. Neural Networks 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. 
  13. ^ Graves, Alex; Schmidhuber, Jürgen (2009). Bengio, Yoshua. ed. “Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks”. Neural Information Processing Systems (NIPS) Foundation: 545–552. https://papers.nips.cc/paper/3449-offline-handwriting-recognition-with-multidimensional-recurrent-neural-networks. 
  14. ^ Hannun, Awni; Case, Carl; Casper, Jared; Catanzaro, Bryan; Diamos, Greg; Elsen, Erich; Prenger, Ryan; Satheesh, Sanjeev; Sengupta, Shubho (17 December 2014). "Deep Speech: Scaling up end-to-end speech recognition". arXiv:1412.5567 [cs.CL].
  15. ^ Bo Fan, Lijuan Wang, Frank K. Soong, and Lei Xie (2015). Photo-Real Talking Head with Deep Bidirectional LSTM. In Proceedings of ICASSP 2015.
  16. ^ Zen, Heiga (2015). “Unidirectional Long Short-Term Memory Recurrent Neural Network with Recurrent Output Layer for Low-Latency Speech Synthesis”. Google.com. ICASSP. pp. 4470–4474. Retrieved April 5, 2019.
  17. ^ Sak, Haşim (September 2015). “Google voice search: faster and more accurate”. Retrieved April 5, 2019.
  18. ^ Sutskever, I.; Vinyals, O.; Le, Q. (2014). “Sequence to Sequence Learning with Neural Networks”. Electronic Proceedings of the Neural Information Processing Systems Conference 27: 5346. arXiv:1409.3215. Bibcode: 2014arXiv1409.3215S. https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf.
  19. ^ Jozefowicz, Rafal; Vinyals, Oriol; Schuster, Mike; Shazeer, Noam; Wu, Yonghui (7 February 2016). "Exploring the Limits of Language Modeling". arXiv:1602.02410 [cs.CL].
  20. ^ Gillick, Dan; Brunk, Cliff; Vinyals, Oriol; Subramanya, Amarnag (30 November 2015). "Multilingual Language Processing From Bytes". arXiv:1512.00103 [cs.CL].
  21. ^ Vinyals, Oriol; Toshev, Alexander; Bengio, Samy; Erhan, Dumitru (17 November 2014). "Show and Tell: A Neural Image Caption Generator". arXiv:1411.4555 [cs.CV].
  22. ^ a b Cruse, Holk. Neural Networks as Cybernetic Systems, 2nd and revised edition.
  23. ^ Elman, Jeffrey L. (1990). “Finding Structure in Time”. Cognitive Science 14 (2): 179–211. doi:10.1016/0364-0213(90)90002-E. 
  24. ^ Jordan, Michael I. (1997-01-01). Serial Order: A Parallel Distributed Processing Approach. Neural-Network Models of Cognition. 121. 471–495. doi:10.1016/s0166-4115(97)80111-2. ISBN 9780444819314 
  25. ^ Kosko, B. (1988). “Bidirectional associative memories”. IEEE Transactions on Systems, Man, and Cybernetics 18 (1): 49–60. doi:10.1109/21.87054. 
  26. ^ Rakkiyappan, R.; Chandrasekar, A.; Lakshmanan, S.; Park, Ju H. (2 January 2015). “Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control”. Complexity 20 (3): 39–65. Bibcode: 2015Cmplx..20c..39R. doi:10.1002/cplx.21503. 
  27. ^ Raúl Rojas (1996). Neural networks: a systematic introduction. Springer. p. 336. ISBN 978-3-540-60505-8. https://books.google.com/books?id=txsjjYzFJS4C&pg=PA336 
  28. ^ Jaeger, Herbert; Haas, Harald (2004-04-02). “Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication”. Science 304 (5667): 78–80. Bibcode: 2004Sci...304...78J. doi:10.1126/science.1091277. PMID 15064413. 
  29. ^ W. Maass, T. Natschläger, and H. Markram (2002). “A fresh look at real-time computation in generic recurrent neural circuits”. Technical report, Institute for Theoretical Computer Science (TU Graz). http://www.lsm.tugraz.at/papers/lsm-fresh-148.pdf. 
  30. ^ a b Li, Shuai; Li, Wanqing; Cook, Chris; Zhu, Ce; Yanbo, Gao (2018). “Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN”. IEEE Conference on Computer Vision and Pattern Recognition. arXiv:1803.04831. 
  31. ^ Goller, C.; Küchler, A. (1996). Learning task-dependent distributed representations by backpropagation through structure. 1. 347. doi:10.1109/ICNN.1996.548916. ISBN 978-0-7803-3210-2. https://pdfs.semanticscholar.org/794e/6ed81d21f1bf32a0fd3be05c44c1fa362688.pdf 
  32. ^ Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, pp. 6–7.
  33. ^ Griewank, Andreas; Walther, Andrea (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation (Second ed.). SIAM. ISBN 978-0-89871-776-1. https://books.google.com/books?id=xoiiLaRxcbEC 
  34. ^ Socher, Richard; Lin, Cliff; Ng, Andrew Y.; Manning, Christopher D., “Parsing Natural Scenes and Natural Language with Recursive Neural Networks”, 28th International Conference on Machine Learning (ICML 2011), http://ai.stanford.edu/~ang/papers/icml11-ParsingWithRecursiveNeuralNetworks.pdf 
  35. ^ Socher, Richard; Perelygin, Alex; Y. Wu, Jean; Chuang, Jason; D. Manning, Christopher; Y. Ng, Andrew; Potts, Christopher. “Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank”. EMNLP 2013. http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf. 
  36. ^ a b c d Schmidhuber, Jürgen (1992). “Learning complex, extended sequences using the principle of history compression”. Neural Computation 4 (2): 234–242. doi:10.1162/neco.1992.4.2.234. ftp://ftp.idsia.ch/pub/juergen/chunker.pdf. 
  37. ^ Schmidhuber, Jürgen (2015). “Deep Learning”. Scholarpedia 10 (11): 32832. Bibcode: 2015SchpJ..1032832S. doi:10.4249/scholarpedia.32832. http://www.scholarpedia.org/article/Deep_Learning#Fundamental_Deep_Learning_Problem_and_Unsupervised_Pre-Training_of_RNNs_and_FNNs. 
  38. ^ a b c Sepp Hochreiter (1991), Untersuchungen zu dynamischen neuronalen Netzen, Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber.
  39. ^ C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, "Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks", Neural Computation, 4(3), p. 393, 1992.
  40. ^ C.W. Omlin, C.L. Giles, "Constructing Deterministic Finite-State Automata in Recurrent Neural Networks". Journal of the ACM, 43(6), 937–972, 1996.
  41. ^ Gers, Felix; Schraudolph, Nicol N.; Schmidhuber, Jürgen (2000). “Learning Precise Timing with LSTM Recurrent Networks”. doi:10.1162/153244303768966139. https://www.researchgate.net/publication/220320057 Retrieved April 5, 2019.
  42. ^ Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Jürgen (2009-09-14). Evolving Memory Cell Structures for Sequence Learning. Lecture Notes in Computer Science. 5769. Springer, Berlin, Heidelberg. 755–764. doi:10.1007/978-3-642-04277-5_76. ISBN 978-3-642-04276-8. http://mediatum.ub.tum.de/doc/1289041/document.pdf 
  43. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). “Sequence labelling in structured domains with hierarchical recurrent neural networks”. Proc. 20th Int. Joint Conf. on Artificial Intelligence, IJCAI 2007: 774–779. 
  44. ^ Graves, Alex; Fernández, Santiago; Gomez, Faustino (2006). “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks”. In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. 
  45. ^ Gers, F. A.; Schmidhuber, E. (November 2001). “LSTM recurrent networks learn simple context-free and context-sensitive languages”. IEEE Transactions on Neural Networks 12 (6): 1333–1340. doi:10.1109/72.963769. ISSN 1045-9227. PMID 18249962. http://ieeexplore.ieee.org/document/963769/. 
  46. ^ Heck, Joel; Salem, Fathi M. (12 January 2017). "Simplified Minimal Gated Unit Variations for Recurrent Neural Networks". arXiv:1701.03452 [cs.NE].
  47. ^ Dey, Rahul; Salem, Fathi M. (20 January 2017). "Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks". arXiv:1701.05923 [cs.NE].
  48. ^ Chung, Junyoung; Gulcehre, Caglar; Cho, KyungHyun; Bengio, Yoshua (2014). "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling". arXiv:1412.3555 [cs.NE].
  49. ^ “Recurrent Neural Network Tutorial, Part 4 – Implementing a GRU/LSTM RNN with Python and Theano – WildML” (October 27, 2015). Retrieved April 5, 2019.
  50. ^ Graves, Alex; Schmidhuber, Jürgen (2005-07-01). “Framewise phoneme classification with bidirectional LSTM and other neural network architectures”. Neural Networks. IJCNN 2005 18 (5): 602–610. doi:10.1016/j.neunet.2005.06.042. PMID 16112549. 
  51. ^ Thireou, T.; Reczko, M. (July 2007). “Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins”. IEEE/ACM Transactions on Computational Biology and Bioinformatics 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. 
  52. ^ Harvey, Inman; Husbands, P.; Cliff, D. (1994), “Seeing the light: Artificial evolution, real vision”, 3rd international conference on Simulation of adaptive behavior: from animals to animats 3, pp. 392–401, https://www.researchgate.net/publication/229091538_Seeing_the_Light_Artificial_Evolution_Real_Vision 
  53. ^ Quinn, Matthew (2001). Evolving communication without dedicated communication channels. Lecture Notes in Computer Science. 2159. 357–366. doi:10.1007/3-540-44811-X_38. ISBN 978-3-540-42567-0 
  54. ^ Beer, R.D. (1997). “The dynamics of adaptive behavior: A research program”. Robotics and Autonomous Systems 20 (2–4): 257–289. doi:10.1016/S0921-8890(96)00063-2. 
  55. ^ Paine, Rainer W.; Tani, Jun (2005-09-01). “How Hierarchical Control Self-organizes in Artificial Adaptive Systems”. Adaptive Behavior 13 (3): 211–225. doi:10.1177/105971230501300303. 
  56. ^ Recurrent Multilayer Perceptrons for Identification and Control: The Road to Applications. (1995) 
  57. ^ Yamashita, Yuichi; Tani, Jun (2008-11-07). “Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment”. PLOS Computational Biology 4 (11): e1000220. Bibcode: 2008PLSCB...4E0220Y. doi:10.1371/journal.pcbi.1000220. PMC 2570613. PMID 18989398. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2570613/. 
  58. ^ Shibata Alnajjar, Fady; Yamashita, Yuichi; Tani, Jun (2013). “The hierarchical and functional connectivity of higher-order cognitive mechanisms: neurorobotic model to investigate the stability and flexibility of working memory”. Frontiers in Neurorobotics 7: 2. doi:10.3389/fnbot.2013.00002. PMC 3575058. PMID 23423881. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575058/. 
  59. ^ Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines". arXiv:1410.5401 [cs.NE].
  60. ^ Sun, Guo-Zheng; Giles, C. Lee; Chen, Hsing-Hen (1998). “The Neural Network Pushdown Automaton: Architecture, Dynamics and Training”. In Giles, C. Lee. Adaptive Processing of Sequences and Data Structures. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 296–345. doi:10.1007/bfb0054003. ISBN 9783540643418 
  61. ^ "letting μ be the value of the recurrent weight, and assuming for simplicity that the units are linear ..., the activation of the output unit at time t is given by " Jordan, M.I. (1986). Serial order: A parallel distributed processing approach. (Tech. Rep. No. 8604). San Diego: University of California, Institute for Cognitive Science.
  62. ^ "popular RNN models are nonlinear sequence models with activation functions between each time step." Albert Gu, et al. (2021). Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers.
  63. ^ Albert Gu, et al. (2020). HiPPO: Recurrent Memory with Optimal Polynomial Projections. NeurIPS 2020.
  64. ^ "LSSLs are recurrent. ... LSSL can be discretized into a linear recurrence ... as a stateful recurrent model" Albert Gu, et al. (2021). Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers.
  65. ^ Albert Gu, et al. (2021). Efficiently Modeling Long Sequences with Structured State Spaces.
  66. ^ Werbos, Paul J. (1988). “Generalization of backpropagation with application to a recurrent gas market model”. Neural Networks 1 (4): 339–356. doi:10.1016/0893-6080(88)90007-x. http://linkinghub.elsevier.com/retrieve/pii/089360808890007X. 
  67. ^ Rumelhart, David E. (1985). Learning Internal Representations by Error Propagation. Institute for Cognitive Science, University of California, San Diego. https://books.google.com/books?id=Ff9iHAAACAAJ 
  68. ^ Robinson, A. J. (1987). The Utility Driven Dynamic Error Propagation Network. Technical Report CUED/F-INFENG/TR.1. University of Cambridge Department of Engineering. https://books.google.com/books?id=6JYYMwEACAAJ 
  69. ^ Williams, R. J.; Zipser, D. “Gradient-based learning algorithms for recurrent networks and their computational complexity”. In Backpropagation: Theory, Architectures, and Applications (1 February 2013). Psychology Press. ISBN 978-1-134-77581-1. https://books.google.com/books?id=B71nu3LDpREC 
  70. ^ Schmidhuber, Jürgen (1989-01-01). “A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks”. Connection Science 1 (4): 403–412. doi:10.1080/09540098908915650. 
  71. ^ Príncipe, José C.; Euliano, Neil R.; Lefebvre, W. Curt (2000). Neural and adaptive systems: fundamentals through simulations. Wiley. ISBN 978-0-471-35167-2. https://books.google.com/books?id=jgMZAQAAIAAJ 
  72. ^ Ollivier, Yann; Tallec, Corentin; Charpiat, Guillaume (28 July 2015). "Training recurrent networks online without backtracking". arXiv:1507.07680 [cs.NE].
  73. ^ Schmidhuber, Jürgen (1992-03-01). “A Fixed Size Storage O(n³) Time Complexity Learning Algorithm for Fully Recurrent Continually Running Networks”. Neural Computation 4 (2): 243–248. doi:10.1162/neco.1992.4.2.243. 
  74. ^ Williams, R. J. (1989). Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report Technical Report NU-CCS-89-27. Boston: Northeastern University, College of Computer Science. http://citeseerx.ist.psu.edu/showciting?cid=128036. 
  75. ^ Pearlmutter, Barak A. (1989-06-01). “Learning State Space Trajectories in Recurrent Neural Networks”. Neural Computation 1 (2): 263–269. doi:10.1162/neco.1989.1.2.263. https://doi.org/10.1162/neco.1989.1.2.263. 
  76. ^ Hochreiter, S. (15 January 2001). “Gradient flow in recurrent nets: the difficulty of learning long-term dependencies”. A Field Guide to Dynamical Recurrent Networks. John Wiley & Sons. ISBN 978-0-7803-5369-5. https://books.google.com/books?id=NWOcMVA64aAC 
  77. ^ Hochreiter, Sepp; Schmidhuber, Jürgen (1997-11-01). “Long Short-Term Memory”. Neural Computation 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. 
  78. ^ Campolucci, P.; Uncini, A.; Piazza, F.; Rao, B. D. (1999). “On-Line Learning Algorithms for Locally Recurrent Neural Networks”. IEEE Transactions on Neural Networks 10 (2): 253–271. doi:10.1109/72.750549. PMID 18252525. 
  79. ^ Wan, E. A.; Beaufays, F. (1996). “Diagrammatic derivation of gradient algorithms for neural networks”. Neural Computation 8: 182–201. doi:10.1162/neco.1996.8.1.182. https://doi.org/10.1162/neco.1996.8.1.182. 
  80. ^ a b Campolucci, P.; Uncini, A.; Piazza, F. (2000). “A Signal-Flow-Graph Approach to On-line Gradient Calculation”. Neural Computation 12 (8): 1901–1927. doi:10.1162/089976600300015196. 
  81. ^ Gomez, F. J.; Miikkulainen, R. (1999), “Solving non-Markovian control tasks with neuroevolution”, IJCAI 99, Morgan Kaufmann, http://www.cs.utexas.edu/users/nn/downloads/papers/gomez.ijcai99.pdf Retrieved April 5, 2019. 
  82. ^ “Applying Genetic Algorithms to Recurrent Neural Networks for Learning Network Parameters and Architecture”. Retrieved April 5, 2019.
  83. ^ Gomez, Faustino; Schmidhuber, Jürgen; Miikkulainen, Risto (June 2008). “Accelerated Neural Evolution Through Cooperatively Coevolved Synapses”. J. Mach. Learn. Res. 9: 937–965. http://dl.acm.org/citation.cfm?id=1390681.1390712. 
  84. ^ "Copying task. This standard RNN task ... directly tests memorization, where models must regurgitate a sequence of tokens seen at the beginning of the sequence." Gu, et al. (2020). HiPPO: Recurrent Memory with Optimal Polynomial Projections.
  85. ^ " the first 10 tokens (a0, a1, . . . , a9) are randomly chosen from {1, . . . , 8}, the middle N tokens are set to 0, and the last ten tokens are 9. The goal of the recurrent model is to output (a0, . . . , a9) in order on the last 10 time steps, whenever the cue token 9 is presented." Gu, et al. (2020). HiPPO: Recurrent Memory with Optimal Polynomial Projections.
  86. ^ "ability to recall exactly data seen a long time ago." Arjovsky, et al. (2015). Unitary Evolution Recurrent Neural Networks.
  87. ^ Figure 1 of Arjovsky, et al. (2015). Unitary Evolution Recurrent Neural Networks.
  88. ^ Figure 7 of Gu, et al. (2020). HiPPO: Recurrent Memory with Optimal Polynomial Projections.
  89. ^ Siegelmann, Hava T.; Horne, Bill G.; Giles, C. Lee (1995). Computational Capabilities of Recurrent NARX Neural Networks. University of Maryland. https://books.google.com/books?id=830-HAAACAAJ&pg=PA208 
  90. ^ Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks. 543–548. doi:10.1109/IROS.2006.282190. ISBN 978-1-4244-0258-8. http://ieeexplore.ieee.org/document/4059310/ 
  91. ^ Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). “Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning”. Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858. https://www.academia.edu/5830256. 
  92. ^ Graves, A.; Schmidhuber, J. (2005). “Framewise phoneme classification with bidirectional LSTM and other neural network architectures”. Neural Networks 18 (5–6): 602–610. doi:10.1016/j.neunet.2005.06.042. PMID 16112549. 
  93. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). An Application of Recurrent Neural Networks to Discriminative Keyword Spotting. ICANN'07. Berlin, Heidelberg: Springer-Verlag. 220–229. ISBN 978-3540746935. http://dl.acm.org/citation.cfm?id=1778066.1778092 
  94. ^ Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). “Speech Recognition with Deep Recurrent Neural Networks”. Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on: 6645–6649. 
  95. ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). “Long Short Term Memory Networks for Anomaly Detection in Time Series”. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2015-56.pdf. 
  96. ^ Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). “Learning precise timing with LSTM recurrent networks”. Journal of Machine Learning Research 3: 115–143. http://www.jmlr.org/papers/volume3/gers02a/gers02a.pdf. 
  97. ^ Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). Learning the Long-Term Structure of the Blues. Lecture Notes in Computer Science. 2415. Springer, Berlin, Heidelberg. 284–289. doi:10.1007/3-540-46084-5_47. ISBN 978-3540460848 
  98. ^ Schmidhuber, J.; Gers, F.; Eck, D. (2002). “Learning nonregular languages: A comparison of simple recurrent networks and LSTM”. Neural Computation 14 (9): 2039–2041. doi:10.1162/089976602320263980. PMID 12184841. 
  99. ^ Gers, F. A.; Schmidhuber, J. (2001). “LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages”. IEEE Transactions on Neural Networks 12 (6): 1333–1340. doi:10.1109/72.963769. PMID 18249962. ftp://ftp.idsia.ch/pub/juergen/L-IEEE.pdf. 
  100. ^ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). “Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets”. Neural Networks 16 (2): 241–250. doi:10.1016/s0893-6080(02)00219-8. 
  101. ^ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
  102. ^ Graves, Alex; Fernández, Santiago; Liwicki, Marcus; Bunke, Horst; Schmidhuber, Jürgen (2007). Unconstrained Online Handwriting Recognition with Recurrent Neural Networks. NIPS'07. USA: Curran Associates Inc.. 577–584. ISBN 9781605603520. http://dl.acm.org/citation.cfm?id=2981562.2981635 
  103. ^ M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, A. Baskurt. Sequential Deep Learning for Human Action Recognition. 2nd International Workshop on Human Behavior Understanding (HBU), A.A. Salah, B. Lepri ed. Amsterdam, Netherlands. pp. 29–39. Lecture Notes in Computer Science 7065. Springer. 2011
  104. ^ Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). “Fast model-based protein homology detection without alignment”. Bioinformatics 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID 17488755. 
  105. ^ Thireou, T.; Reczko, M. (2007). “Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins”. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB) 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID 17666763. 
  106. ^ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). Predictive Business Process Monitoring with LSTM neural networks. Lecture Notes in Computer Science. 10253. 477–492. arXiv:1612.02130. doi:10.1007/978-3-319-59536-8_30. ISBN 978-3-319-59535-1 
  107. ^ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). “Doctor AI: Predicting Clinical Events via Recurrent Neural Networks”. Proceedings of the 1st Machine Learning for Healthcare Conference: 301–318. http://proceedings.mlr.press/v56/Choi16.html. 
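
References [84] and [85] above quote the copying task precisely enough to reconstruct its data layout. The following is a small, hypothetical NumPy generator based only on the quoted description (the function name and the batch encoding are my own choices, not from the cited papers):

```python
import numpy as np

def copying_task_batch(batch, N, rng):
    # Layout quoted in refs [84]-[85]: 10 data tokens from {1,...,8},
    # then N blank tokens (0), then ten cue tokens (9).
    T = 10 + N + 10
    x = np.zeros((batch, T), dtype=np.int64)
    data = rng.integers(1, 9, size=(batch, 10))  # a0..a9, each in {1..8}
    x[:, :10] = data                             # data at the start
    x[:, -10:] = 9                               # cue tokens at the end
    y = np.zeros((batch, T), dtype=np.int64)
    y[:, -10:] = data                            # recall a0..a9 in order
    return x, y

rng = np.random.default_rng(0)
x, y = copying_task_batch(batch=2, N=20, rng=rng)
print(x[0])  # inputs:  data, N zeros, ten 9s
print(y[0])  # targets: zeros, then the data on the last 10 steps
```

A recurrent model trained on such batches must carry the first ten tokens across the N blank steps, which directly probes how long its internal state can retain information.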

