This section, under the NLP category, presents the historical progression of the field's major models and techniques.

The timeline is as follows:

  1. Recurrent Neural Network (RNN) - Rumelhart et al. - 1986,
  2. Long Short-Term Memory (LSTM) - Hochreiter, Schmidhuber - 1997,
  3. Sequence to Sequence (Seq2Seq) - Sutskever, Vinyals, Le - 2014,
  4. Attention pt. 1 & Attention pt. 2 - Google - 2017,
  5. GPT 1, 2, 3 - OpenAI - 2018 ~ 2020,
  6. Bidirectional Encoder Representations from Transformers (BERT) - Google - 2018.

In addition, a few other interesting models will be reviewed and explained in simpler terms.