Creativity Support for Computational Literature
By Daniel C. Howe
A dissertation submitted in partial fulfillment of the requirements for the degree ...
This presents an interesting tradeoff between the two paradigms. With statistical models, relatively little prior knowledge of the genre is required in the creation of a model, while a “deep” understanding of the structure of the texts in question is generally needed to construct a convincing grammar for it. Statistical models tend to work best with large amounts of textual input, as the statistical properties grow stronger (to a point) with a larger input set. Grammars, on the other hand, can be represented compactly and require no external input set for operation, a fact that recommends them for web-based projects, at least prior to the addition of the RiTaServer module. Further, the two approaches differ in that after creating a “successful” grammar, according to whatever definition one might use, one is often then in a position to generalize about the genre being examined. If the generated outputs are representative of the genre, then the grammar has generally captured some, if not all, of the relevant rules on which that genre operates. On the other hand, the success of a statistical model tends to tell us more about the particular statistical approach (and possibly the chosen input texts) than about the genre in question.
From a pedagogical perspective this is an important distinction. While grammars may require more knowledge and more effort to create, they tend to result in more post-process knowledge about the type of language under examination. Statistical models, on the other hand, require less overhead, tend to generate more “surprising” results, and teach students more about the “process” (statistical analysis) than the “product” text, which may or may not represent some existing genre. In the context of PDAL and RiTa, both approaches are important, and further, they shed light upon one another, especially when presented back-to-back, as was generally the case in the class. Lastly, they served as a conceptual link into a deeper understanding of the history of artificial intelligence, illustrating the historical distinction between “neat” and “scruffy” approaches [Wardrip-Fruin 2006] that has long been a subject of debate in the field. Of course, the ideal approach is often a mix of the neat and the scruffy, depending on the project. This avenue is made easily available through the RiTa tools, as when, for example, grammars make calls to statistical models to generate some text item that obeys both the grammatical specification and the statistical distributions.
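The neat/scruffy hybrid described above can be sketched in a few lines of code. The following is a conceptual illustration in Python, not RiTa code (RiTa itself is a Java/Processing library, and all names here are hypothetical): a small context-free grammar drives the high-level structure, while word choice at the leaves is delegated to corpus-derived frequencies.

```python
import random

# A tiny context-free grammar for high-level sentence structure.
GRAMMAR = {
    "<s>": [["<np>", "<vp>"]],
    "<np>": [["the", "<noun>"]],
    "<vp>": [["<verb>", "<np>"]],
}

# Word frequencies as they might be counted from an input corpus;
# the grammar delegates terminal selection to this statistical model.
FREQS = {
    "<noun>": {"castle": 5, "dream": 3, "insect": 1},
    "<verb>": {"haunts": 2, "devours": 1},
}

def expand(symbol, rng=random):
    if symbol in GRAMMAR:  # non-terminal: apply a grammar rule
        production = rng.choice(GRAMMAR[symbol])
        return " ".join(expand(s, rng) for s in production)
    if symbol in FREQS:    # leaf: choose by statistical distribution
        words = list(FREQS[symbol])
        weights = list(FREQS[symbol].values())
        return rng.choices(words, weights=weights)[0]
    return symbol          # literal terminal (e.g., "the")

sentence = expand("<s>")
```

Every output obeys the grammatical specification (here, always "the NOUN VERB the NOUN") while its word choices follow the corpus distribution, which is the essence of the hybrid approach.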
While PDAL students were required to create a project employing some type of probabilistic language model, it was left to each student to decide what type of model to use.
The RiTa toolkit provides a range of objects that leverage probabilistic techniques, including n-gram-based generators (RiMarkov), Keyword-In-Context models (RiKWICker), and “maximum entropy” parsers and taggers (RiPosTagger, RiChunker, and RiParser). However, as n-grams featured prominently in one of the course texts, Charles Hartman’s Virtual Muse, they (with support via the RiMarkov object) were chosen by a majority of students for their projects. As was customary, the topic was presented with a range of reading, coding, writing, and critiquing exercises. The primary readings for this section were Hartman’s Virtual Muse and Eric Elshtain’s writings on the Gnoetry engine [Elshtain 2006]. One of the unique contributions of the former text is its rigorous discussion of n-gram-based generation in a literary context, which Hartman used extensively in his work Monologues of Soul and Body.
Elshtain’s text presents an interesting set of extensions to the basic n-gram technique.
In addition to this reading, several artworks employing n-grams were presented and critiqued, including ‘Talking Cure’ [Castigilia et al. 2002] and several of the poetic texts generated using Elshtain’s Gnoetry engine (for more information, visit http://www.beardofbees.com/gnoetry.html). The introduction began with a very simple sketch (see Appendix: Examples) that generated new texts from a combined set of Wittgenstein and Kafka pieces, allowing users to interactively experiment with different n values and immediately see the effect on the output. In response to questions concerning the working of the program, a very basic introduction/review of probability was presented, and students were directed to create a similar “mash-up” of their own to be performed in the subsequent class. In the following class, students presented their programs for critique. A discussion ensued about limitations of the approach, and additional features of the RiTa tools were presented for those who had not discovered them on their own, either via the examples or the documentation. These included weighting of inputs, constraints on repetition, custom tokenization, feature compression (case, synonyms, etc.), and the literary extension methods discussed in the technical section (e.g., getCompletions(), getProbabilities(), and getProbabilityMap()), which allow for some degree of interactive control of the model during generation. In addition, several hybrid approaches were presented, including the use of RiMarkov on other features (provided by the RiAnalyzer), such as part-of-speech. Another approach combined a grammar for higher-level structures (e.g., section, paragraph, or even sentence) with statistical models for lower-level tasks, such as word selection and semantic consistency. Several interesting (and publishable) projects resulted from this set of work, including a full-scale dramatic play generator complete with lighting and stage directions (see Appendix: Student Project Gallery).
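The core of the n-gram generation technique underlying such sketches can be shown compactly. The following Python sketch is a conceptual illustration only, not RiTa's implementation (RiMarkov's actual API differs): it maps each (n-1)-word context to its observed continuations, then walks the model, so that every adjacent word pair in the output is attested in the input.

```python
import random
from collections import defaultdict

def build_model(text, n=2):
    """Map each (n-1)-word context to the words that follow it (n >= 2)."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])
        model[context].append(words[i + n - 1])
    return model

def generate(model, n=2, length=20, rng=random):
    """Start from a random context and extend until length or a dead end."""
    context = rng.choice(list(model))
    out = list(context)
    while len(out) < length:
        followers = model.get(tuple(out[-(n - 1):]))
        if not followers:
            break  # context never seen in the input: stop
        out.append(rng.choice(followers))
    return " ".join(out)

# A toy input; larger n values produce output closer to the source text.
text = "the world is all that is the case the world divides into facts"
model = build_model(text, n=2)
output = generate(model, n=2, length=10)
```

Raising n tightens the constraint (longer contexts have fewer continuations), which is exactly the tradeoff students observed when experimenting with different n values.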
Concurrently, students were given a coding assignment to build a letter-level concordance for an input text (presented alternatively as a unigram model, or an n-gram model with n=1). This assignment led naturally into a first lesson on data structures, as students quickly realized that storing the required information would call for some type of dictionary-like structure. In this dictionary, given some “key” (often a single letter), one could obtain the number of times it appeared in the input, without scanning the input each time. The pros and cons of various approaches were discussed, from Lists, to Hashtables, to Arrays indexed on character code, in terms of both efficiency and storage space. By the end of the session, most students seemed comfortable with the assignment itself, and, more importantly, with evaluating (at least at a very basic level) the different data representation alternatives, a central topic in many introductory computer science courses. Further, this discussion was motivated directly by the materials and problems of the given context (creative text generation) rather than by an abstract problem designed to match the topic to be taught.
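A minimal dictionary-based solution to the concordance assignment might look like the following sketch (illustrative Python, not a student submission or RiTa code): a single pass over the input builds the letter-to-count mapping, after which each lookup is a constant-time dictionary access rather than a rescan of the text.

```python
def concordance(text):
    """Count occurrences of each letter, keyed on the letter itself."""
    counts = {}
    for ch in text.lower():
        if ch.isalpha():  # ignore spaces, digits, and punctuation
            counts[ch] = counts.get(ch, 0) + 1
    return counts

counts = concordance("In the beginning")
# counts["n"] now holds the number of times 'n' appears, with no rescan
```

The same structure generalizes directly to word-level keys, which is one reason the assignment served as a natural bridge to the n-gram models discussed above.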
Lastly, a maximum entropy approach, as represented in the tagging, chunking, and parsing components of RiTa, was presented. Although not all students possessed the mathematical experience required for full comprehension, it was a clear “next step” after the previous assignment, especially for those students intending to continue on to further computer science courses. As was generally the case with the RiTa tools, there were (at least) two levels of understanding that enabled use of the tools: a base level concerning what the various methods could do, and a deeper understanding of the inner workings of the components (always available for inspection). While the latter allowed users to take full advantage of the functionalities and extensions in the RiTa objects, it was not required for those wishing to make only simple use of the tools. One of the part-of-speech taggers included with RiTa used a maximum entropy approach (the other was a faster, but less accurate, transformation-based tagger following Brill) and was presented as an example for this section. Not only was part-of-speech tagging an easily understood example, it led (as soon as substitutions were attempted) directly into chunking, where students wished to replace noun-phrases rather than simple nouns. This led into a discussion of parse-trees and strategies for parsing (bottom-up, top-down, chart strategies, etc.), and additionally made clear the presence of recursive syntactic structures, e.g., noun-phrases containing other noun-phrases, and the need, in some cases, for a full-fledged parser (RiParser) rather than a simple chunker (RiChunker). The presence of such structures initiated a discussion of recursion itself, and a few simple recursive algorithms were presented in combination with a more general presentation of the kind of problem for which a recursive solution is recommended, i.e., one containing sub-problems with a structure similar to the initial problem. Rather than the typical Fibonacci or factorial examples, recursively structured English sentences were presented.
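A language-driven recursion example of the kind described might be sketched as follows (an illustrative Python fragment, not RiTa code; the word lists are hypothetical): noun-phrases can embed other noun-phrases, so generating one is naturally expressed as a function that calls itself, with a depth parameter serving as the base case.

```python
import random

NOUNS = ["house", "tree", "river", "garden"]
PREPS = ["behind", "beside", "near"]

def noun_phrase(depth, rng=random):
    """Recursive case embeds another noun-phrase; depth 0 is the base case."""
    np = "the " + rng.choice(NOUNS)
    if depth > 0:
        np += " " + rng.choice(PREPS) + " " + noun_phrase(depth - 1, rng)
    return np

phrase = noun_phrase(2)
# e.g., "the house behind the tree beside the river"
```

The sub-problem (the embedded noun-phrase) has the same structure as the original problem, which is precisely the property that recommends a recursive solution.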
Integrating Computational Thinking

In the assignments above we can see how, in addition to the “artistic” elements required to make a compelling work of digital art, an impressive range of core computational ideas arise naturally from the material at hand. Through just the two relatively simple examples presented above, grammars and n-grams, students will have been introduced to a significant number of key computer science concepts, many of which are likely to be taught in an introductory CS sequence: from finite-state automata to context-free grammars and the language hierarchy; from elementary data structures to hashtables and the construction of parse trees; from regular expressions to recursion. Rather than appearing to students as arbitrary additions to the “real” topic at hand, the relevance of these ideas is immediately apparent in a course that focuses on creative language-driven programming projects.
3.6.2 Final Student Projects in PDAL

Typically, final projects involved both novel combinations of existing RiTa components and the creation of custom code to extend or augment existing functionality. In several cases, such extensions have been added to the core RiTa library, with authors receiving credit on the RiTa website. Because RiTa provides a core subset of the potentially daunting infrastructure generally required for language-based artworks, students are comparatively free to explore a variety of topics through both individual and collaborative projects, and are encouraged to focus on those aspects of their work they find most engaging. Further, the open and community-oriented nature of the programming environment provided students with a sense that their projects were meaningful contributions, both to other RiTa users and to the larger digital art community, as opposed to just “exercises”. Several students in the courses expressed interest in incorporating elements of their projects back into the library, while others exhibited and published their projects in well-respected galleries and journals for digital literature. Still others expressed interest in creating their own libraries to support creativity in specific domains. RiTa itself (the code for which was often discussed in class) provided a helpful example in these cases of what such a library might look like, with well-thought-out interfaces, clean code structure, and thorough documentation.
In addition to source code and functioning programs, careful documentation of all aspects of students’ process was stressed, both as a method to evaluate their development in a critical/reflective manner, and to provide examples and resources for others in the RiTa/Processing community. Finally, in the last meeting of each semester, students presented their work in a live setting to a larger audience of practicing artists, researchers and educators.
Both the breadth and depth of these projects have been astonishing, and are discussed further in Chapter 5 (Evaluation) as one measure of the tools’ ability to support a wide range of creative work. For those interested, several dozen of these projects are available in the project gallery located on the RiTa website at http://www.rednoise.org/rita/rita_gallery.htm.
See Chapter 5 (Evaluation) for further discussion of this claim.
4.1 Introduction

This chapter presents a summary of prior work that has influenced the theorization, design, implementation, and deployment of the RiTa toolkit. The range of this work is broad, as little, if any, existing research has targeted our specific goals. With this in mind, we focus here on related research and practice whose goals overlap with at least one of the explicit goals of this project, as laid out in the introduction. While the brief discussion of prior work in the opening chapter presents the current state of creativity support for the literary arts, the work in this section represents more direct influences on our
research, and falls into the following primary categories:
• Programmatic Educational Environments
  o for Natural-Language Processing (NLTK, SimpleNLG, etc.)
  o for Procedural Literacy and Interactive Art (Processing, Max/MSP, etc.)
• Computer Science and Literary Art (Strachey, Shannon, Weizenbaum, Bringsjord)
• Computationally-augmented Literary Experiments: Tools and Practice

For reasons of economy, several areas of active research relating only tangentially to RiTa are not addressed here, though pointers to resources on these topics have been included where applicable. These include tools for interactive fiction (e.g., Inform or Curveship), games with narrative and/or conversational elements (e.g., Facade), and non-programmatic support tools (e.g., scriptwriting aids like Dramatica, argumentative-writing aids like Euclid, and collaborative writing tools like EtherPad).
4.2 Programmatic Educational Environments

Computer Science (CS) researchers have created a wide range of programmatic libraries that attempt to aggregate the range of tasks required to perform high-level natural language research. Recent years have also seen some interest in adapting this approach to the classroom, by providing tools that specifically address pedagogical issues that arise as new computer science students attempt to work with natural language. Similarly, there has been impressive growth in both the number and quality of libraries and environments designed specifically for computational artists. As the RiTa toolkit bridges these two research areas, this section presents a review of important works in each that have informed our approach.