Conditional Story Generation Part 1

Author: Francesco Fumagalli

Conditional Story Generation Part 1: Helping story authors with text generation

Photo by Min An, Pexels

Natural Language Processing (NLP) is an extremely complex field with two major branches: Natural Language Understanding and Natural Language Generation. For a child learning English, we would simply call these reading and writing.

This is a great time to be involved in NLP. The introduction of Transformer models in 2017 dramatically improved performance, and the release of GPT-3 this year generated a lot of excitement. Both will be discussed in more detail later.

Writing isn't easy. It's a simple but powerful realization: many authors get bogged down searching for the perfect words, or simply lose their way. Grammarly helps with grammar; we want to support the creative process, provide inspiration from other stories, and help writers create beautiful stories.

"We owe it to each other to tell stories."
Neil Gaiman

Before we could build something useful and effective, we first had to research, learn, and understand what it means to write a story. This article shares what I have learned, helps others follow our path, and offers a point of reference for anyone who wants to navigate the huge ocean that is Natural Language Generation (and story writing) without getting lost. Our previous article focused on making children's literature more engaging by customizing its illustrations.

Natural Language Generation: from n-grams to Transformers

(Despite my best efforts to make this reader-friendly, the last section is quite technical.)

What does it take to generate text? A model must first be trained on a corpus: a collection of many examples it can learn from. Common corpora that allow unsupervised training include BookCorpus, a collection of 11,038 books across different genres.
Other sources include articles from Wikipedia and web crawls such as CommonCrawl. Even with huge computational power, it can take days, sometimes weeks, for a model to discover the relationships between words and the statistical distributions of probabilities that connect them, which is what gives the words meaning as a whole. The model's "reasoning" depends on the text it was given, so it is important to choose that text carefully: models trained on different corpora will produce very different results.

Let's not forget how we got here. Template from Venngage

Statistical models

The simplest approach in NLP is bag-of-words, which uses word occurrence counts for text classification and other tasks. Counting occurrences discards other information, such as where each word sits in a sentence and its relationship to the other words, so it is not suitable for text generation.

N-gram models solve this problem: instead of counting single words, we count sequences of N consecutive words. Once the N-grams are counted, generating text is easy. Given the N-1 most recent words, which word is most likely to come next? Repeat iteratively until you have a complete sentence, paragraph, or book. A higher N yields a better model, but also a greater chance of simply copying passages from the training texts.

Example sentence: "Hello Medium readers". Its 1-grams are (Hello), (Medium), (readers); its 2-grams are (Hello, Medium) and (Medium, readers).

To pick the next word, you can use the Greedy method (always choose the most probable word) or Sampling (draw the word from the probability distribution). Either can be applied iteratively to generate sentences, but neither is always the best choice. Trying all possible combinations of words and keeping the best would be ideal, but it is far too slow. Beam Search is a good alternative.
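To make these decoding strategies concrete, here is a minimal sketch of a bigram (2-gram) model with greedy, sampling, and beam-search decoding. The corpus and all names here are toy illustrations, not part of any real system.

```python
import math
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count consecutive word pairs (2-grams) in a corpus."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev, strategy="greedy"):
    """Pick the word that follows `prev`, greedily or by sampling."""
    candidates = counts[prev]
    if not candidates:
        return None
    if strategy == "greedy":                  # always the most probable word
        return candidates.most_common(1)[0][0]
    words, freqs = zip(*candidates.items())   # sample from the distribution
    return random.choices(words, weights=freqs, k=1)[0]

def generate(counts, start, length=5, strategy="greedy"):
    """Extend `start` one word at a time, until `length` or a dead end."""
    out = [start]
    for _ in range(length):
        w = next_word(counts, out[-1], strategy)
        if w is None:
            break
        out.append(w)
    return " ".join(out)

def beam_search(counts, start, length=5, beam=2):
    """Keep the `beam` most probable partial sequences at each step."""
    seqs = [([start], 0.0)]                   # (sequence, log-probability)
    for _ in range(length):
        expanded = []
        for seq, score in seqs:
            cands = counts[seq[-1]]
            total = sum(cands.values())
            for w, c in cands.items():
                expanded.append((seq + [w], score + math.log(c / total)))
        if not expanded:                      # every sequence hit a dead end
            break
        seqs = sorted(expanded, key=lambda s: s[1], reverse=True)[:beam]
    return " ".join(seqs[0][0])

corpus = "hello medium readers hello medium writers hello world"
model = train_bigrams(corpus)
print(generate(model, "hello"))               # greedy decoding
print(beam_search(model, "hello"))            # beam search with 2 beams
```

Greedy always commits to the locally best word, so it can never recover from an early mistake; beam search delays that commitment by carrying several candidate continuations forward at once.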
Beam Search explores multiple paths at once (the user chooses how many), and the best one is selected, which helps avoid local minima. This approach is particularly helpful for human writers, as it presents them with different options.

Deep Learning models: RNNs

Deep Learning enabled Neural Network-based models that can learn more complicated dependencies between words than n-grams can. Recurrent Neural Networks (RNNs), created in the 1980s, "remember" the dependencies between words while processing text, though it took years before they could outperform statistical techniques. These models have several drawbacks: the gradients, fundamental to Deep Learning, vanish or explode on long sentences and documents, making training extremely unstable.

Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks addressed this issue. LSTMs, introduced in 1997, are specific types of RNN whose memory gates prevent the vanishing gradient problem. However, they don't lend themselves well to transfer learning (reusing part of a model that someone has trained for another task, saving hours of training) and require a specifically labeled dataset.

LSTMs are the basis of the Encoder/Decoder architecture originally used for Machine Translation: take in a complete sentence, "encode" it, then "decode" the output, which gives more consistent results. Attention mechanisms then gave models the ability to identify which words and sentences are relevant to, or dependent on, each other.

Deep Learning models: Transformers

Building on the Encoder/Decoder architecture, attention, and multi-head attention (running attention multiple times in parallel), Transformers were created in 2017. Rather than the recurrent, sequential processing of RNNs, they use positional encoding to "reason" about each word's position and calculate relevance scores.
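As a minimal sketch of how those relevance scores are computed, here is scaled dot-product attention in pure Python: each query is compared against every key, the scores are turned into weights, and those weights mix the values. The vectors below are made-up toy numbers, not learned weights.

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all keys,
    producing a relevance-weighted mix of the values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)   # relevance of each position to this query
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# toy example: 3 word positions, 2-dimensional vectors
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(Q, K, V)
```

Because every query can be compared against every key independently, nothing here is inherently sequential, which is exactly what makes attention easy to parallelize.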
Attention lends itself to parallelization, which has allowed models to be trained with many billions of parameters, including MegatronLM, T-NLG, and GPT-3. You can read a more detailed explanation of Transformers here. Transformers have two key benefits: they are easier to train in parallel, and they capture long-range dependencies between words better than RNNs.
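The positional encoding mentioned above, which injects word order into a model that otherwise sees all positions at once, can be sketched as follows. This follows the sinusoidal scheme from the original 2017 Transformer design; the sizes chosen below are arbitrary examples.

```python
import math

def positional_encoding(num_positions, dim):
    """Sinusoidal positional encoding: even dimensions use sine, odd
    dimensions use cosine, at wavelengths that grow geometrically
    with the dimension index, so every position gets a unique vector."""
    pe = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

# encodings for 4 positions in an 8-dimensional model (toy sizes)
enc = positional_encoding(4, 8)
```

These vectors are simply added to the word embeddings, so the same word at different positions produces different inputs to the attention layers.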

