Creativity Support for Computational Literature
By Daniel C. Howe
A dissertation submitted in partial fulfillment of the requirements for the degree ...
While we have thus far highlighted both tools and context, it should be noted that their relationship is as important as either is in isolation. In the case of PDAL/RiTa, the degree to which tools and learning are tightly coupled has proven to be beneficial and allowed both to develop in a mutually informing fashion. Such a coupling implies a degree of communication, if not close collaboration, between those creating the tools and those developing the accompanying intellectual program. In this regard we were in the fortunate (and perhaps rare) position of having a great deal of control over the ongoing development of both the tools in question and the accompanying pedagogical material (readings, assignments, discussions, critiques, etc.). In several cases, specific materials were chosen to reflect important aspects of the technology being used.
3.5.1 Supporting Serendipity
For an extended discussion of the notion of authorship within digital literary art, see [Howe and Soderman 2009].
Custom tool integration in a specific pedagogical context also presents its own set of difficulties. The task of creating an instructional tool that addresses a specific context (art-making, in our case) complicates the design process by introducing a set of unique constraints which must be iteratively refined and tested.
See [Bird and Loper 2002].
Most of my advances were by mistake. You uncover what is when you get rid of what isn’t. (Buckminster Fuller)

Often, as suggested by the quotations above, an important element of the artistic process is the unexpected or serendipitous outcome that can take the programmer/artist in newly productive directions. Facilitating such outcomes is thus an important (and often overlooked) element of designing for the artistic context, whether in an academic or a professional environment. Computer scientists tend to conceptualize the coding process as the (often difficult) process of correctly implementing a pre-existing idea in a concrete, formal language.
For the traditional computer scientist, the targeted behavior of a program is generally known before coding begins. Thus, we see the emergence of methodologies like “test-driven development,” in which the tests of a program’s correctness are written before the program itself. While such an approach is useful and often appropriate in many contexts, it is, at least in some cases, the opposite of what is required for the arts. Artists often learn what it is they are making as they make it. As one student commented: I need to first articulate what I want to say, through my program, plan it out and answer the entire how, what, when and where questions of my project.
At this stage nothing can be left to chance. Then comes the execution and implementation. And at this point even though we are ‘talking’ to a computer that only does what we tell it to do, we find that in the gap between what we want it to do and what we actually tell it to do (also what the computer does), there lies the act of the unexpected outcome so valued in the intuitive process of art making.
Mistakes and accidents occur in artistic practice, as in more structured programming, but here they can often be highly productive, and thus warrant special attention from tool designers. In order to facilitate serendipity in its various manifestations (surprises, mistakes, chance), several specific features were included in the RiTa design: default non-determinism, soft-failure, and runtime exceptions, each discussed below.
In general practice, one wants a program to produce the same output each time it is run. While some algorithms may specifically use randomness (e.g., quicksort), this generally does not affect the output of the program. A correctly implemented randomized quicksort, for example, will return the same ordering each time it is run; its randomness is intended only to optimize its performance. In fact, a whole class of algorithms, generally referred to as “Monte Carlo” methods, after the casino in Monaco, exploit random operations to achieve deterministic outcomes.
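The randomized-quicksort case above can be sketched in a few lines of Java. This is a minimal, self-contained illustration (not code from the RiTa toolkit): the pivot is chosen at random on each call, yet the sorted result is identical every run.

```java
import java.util.Arrays;
import java.util.Random;

public class RandomizedQuicksort {

    static final Random rand = new Random();

    // Quicksort with a randomly chosen pivot: the random choice affects
    // only the running time, never the (deterministic) sorted result.
    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivotIdx = lo + rand.nextInt(hi - lo + 1);
        int pivot = a[pivotIdx];
        swap(a, pivotIdx, hi);           // move pivot out of the way
        int store = lo;                  // Lomuto-style partition
        for (int i = lo; i < hi; i++)
            if (a[i] < pivot) swap(a, i, store++);
        swap(a, store, hi);              // pivot into final position
        quicksort(a, lo, store - 1);
        quicksort(a, store + 1, hi);
    }

    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] data = { 5, 3, 8, 1, 9, 2 };
        quicksort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // always [1, 2, 3, 5, 8, 9]
    }
}
```

However many times this is run, and however the pivots fall, the output never varies; the randomness is invisible to the user.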
To the contrary, non-deterministic outcomes are often an important part of programs written by artists. This is especially true in interactive and/or generative work, in which it is the very fact that each piece starts off at a different “place” that is often part of the appeal.
More important, however, is the fact that non-determinism allows the programmer herself to traverse the “probability space” of a program during development, which can provide key indicators as to whether the piece will prove interesting to an audience. To support this possibility, iterators in RiTa are non-deterministic by default. If one asks for the set of synonyms for a word, for example, unless specified to the contrary, they will be returned each time in a different order. Further, since a range of RiTa methods allow the programmer to specify an optimal number of results (after which the method returns), these methods actually return different results each time. For example, if one is querying for rhymes in the lexicon, it is often useful to specify the minimum and/or maximum number of results desired, both to reduce processing time (when a great number of rhymes exist) and to allow the rhyme algorithm to relax its constraints (when too few are found).

Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. They are often used when simulating physical and mathematical systems and, because of their reliance on repeated computation and random or pseudo-random numbers, are well suited to calculation by computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm [Metropolis and Ulam 1949].

The name “Monte Carlo” was popularized by the physicists Stanislaw Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis, among others; it is a reference to the Monte Carlo Casino in Monaco, where Ulam’s uncle would borrow money to gamble.
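The default non-determinism of such lookups can be sketched as follows. This is an illustrative Java example, not the actual RiTa API; the method name, word list, and the boolean flag are all hypothetical, and stand in for RiTa's ability to disable shuffling on request.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch (not the RiTa API): a lookup whose results are
// shuffled by default, so repeated queries traverse the "probability
// space" in a different order each time.
public class NonDeterministicLookup {

    static List<String> synonyms(String word, boolean shuffle) {
        // fixed stand-in data; a real lexicon would be queried here
        List<String> results = new ArrayList<>(
            Arrays.asList("happy", "glad", "joyful", "content"));
        if (shuffle)
            Collections.shuffle(results); // default: order varies per call
        return results;                   // otherwise: stable, repeatable order
    }

    public static void main(String[] args) {
        System.out.println(synonyms("pleased", true));  // order varies per run
        System.out.println(synonyms("pleased", false)); // deterministic order
    }
}
```

The same set of results is always returned; only the ordering changes, which is exactly the property that lets a programmer sample different regions of a piece's output during development while still being able to reproduce behavior when debugging.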
Additionally, to support discovery via unexpected results from programming errors, the RiTa toolkit has evolved toward what we call a “soft-failure” model. This means that the library, wherever possible, attempts to present some result to the user, even in the case of “predictable” programming errors, such as when a resource does not exist at a specified location, or the pre/post constraints for a method are not met. Instead of quickly exiting, the library will attempt to generate some default behavior in such cases, while simultaneously printing an error report to the console to explain what is often unexpected behavior. Further, the library generates only “runtime” exceptions, which means that none of the user’s code needs to be wrapped in try/catch blocks. This aligned nicely with both the Processing and Eclipse environments, which eliminate the distinction between the compile and execution steps. Processing presents only a “play” button, which first compiles, then runs the current program, while Eclipse performs continuous compilation, highlighting errors as they are typed.
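The soft-failure pattern might be sketched as below. The names (`loadLexicon`, the fallback string) are illustrative, not the actual RiTa implementation: on a predictable error the method logs a report and falls back to a default rather than exiting, and any unavoidable checked exception is rethrown unchecked so that callers need no try/catch.

```java
// Sketch of a "soft-failure" resource loader (illustrative names only).
public class SoftFailure {

    static String loadLexicon(String path) {
        java.io.File f = new java.io.File(path);
        if (!f.exists()) {
            // report the problem on the console, but keep running
            System.err.println("[WARN] lexicon not found at '" + path
                + "'; using built-in default");
            return "a an the"; // minimal stand-in fallback data
        }
        try {
            return new String(java.nio.file.Files.readAllBytes(f.toPath()));
        } catch (java.io.IOException e) {
            // rethrow as unchecked: user code needs no try/catch blocks
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // a missing resource produces a warning plus a usable default,
        // rather than terminating the sketch
        System.out.println(loadLexicon("no/such/file.txt"));
    }
}
```

The design choice is that the error remains visible (on the console) while the program keeps producing output the user can react to.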
It is also essential, of course, that such functionality can be easily disabled during debugging, so that problematic behavior can be reproduced as necessary until it can be repaired.

For me, as a fairly experienced programmer... the coding I have previously done has had very strict rules about what I had to achieve, and the process was simply working on the code until it reached the constraints that had been set for me by someone else. The process of coding by happy accident, then, was something that I had never before experienced. The first or second week of class, the piece we read about allowing yourself to make "good" mistakes really proved true in my experience. I found that trying something, and seeing what sort of effect it produces when I ran it, had a very strong influence on the ideas behind my projects. Each experiment communicated a different idea, and while I didn't always end up communicating what I had originally intended, I always found something interesting to say through the process of writing the code.

3.5.2 Supporting Artistic ‘Misuse’

Throughout the late twentieth century and into the twenty-first, it became almost common to see performances that used some element of the manipulation, breaking, or destruction of sound mediation technologies… [Kelly 2009]

Somewhat related to the support of serendipity via coding errors is the consideration of intentional “misuse” of tools for artistic effect. Instances of this tendency emerged as early as the first semester of teaching with RiTa. Later we recognized these instances as part of a larger pattern, hinting at an artistic strategy that we might want to facilitate (though at the time it was less than clear how to do so). One student, in a warm-up exercise in the first weeks of the semester, discovered a bug in the text-rendering module which caused text objects (RiTexts) to leave traces on the screen when their contents were swapped at high rates. Rather than report the bug and ask for a fix, as students generally did, this student built a unique project around the behavior. By beginning with a high rate of text change, then placing control of this parameter in the hands of the user (via mouse movement), the user could build up “traces” of previous renderings. If performed “correctly” by the user, a “hidden” message was revealed in the textual sediment.
Another student, exploiting a threading issue that occurred when large numbers of RiSpeech objects were created in the same applet, was able to create a unique aural experience as diphones from a single word or phrase were pronounced in sequence by different voices with different timbres. In both cases, interesting interactive works were built around specific unintended behaviors in the toolkit, behaviors that generally occurred with very low frequency.
In his recent book on contemporary audio art, “Cracked Media,” Caleb Kelly discusses this phenomenon in some depth:
the inquisitive artist, on finding a technology that is new to him or her—be it a newly developed tool just released into the market or an outmoded technology found in a dusty corner of the studio—sets out to see how it works and discover the boundaries and limitations of the device. What can this tool do, and how can I use it in a way that may not have been originally intended? This might be achieved by simple manipulation or modification (taking the technology apart and trying to put it back together), or it might be through overloading it or otherwise stretching its operating parameters, until it starts to fall apart or break down... There is nothing new in this idea: the painter who uses the brush handle on the canvas and the guitarist who plucks the strings around the head of the electric guitar are both engaged in a similar area of practice. Experimentation with readily available tools and resources is central to contemporary artistic practice…

Of course such techniques are not limited to the realm of audio. Dating back to the mid-twentieth century and beyond, digital artists of all genres have manipulated, cracked, and broken media technologies to produce novel artistic experiences.
The question of how to support this artistic technique in the context of software, however, does not have a simple answer. One very basic (but rarely employed) technique is to provide continual public access to all versions of a software tool, even those with already-identified issues. This ensures that projects leveraging such “cracks,” to use Kelly’s terminology, can be further developed even after the relevant defect has been fixed (as was the case in the RiTa examples mentioned above). Another, perhaps more productive strategy is to attempt to code the software tool in such a way that it continues to function, even if in an unexpected way, in unexpected use-cases. If we model the possible execution paths of a program as a finite state machine, this entails special attention, and programming resources, devoted to so-called fail-states. Rather than simply exiting with an error message, one must notify the user of the unexpected state while still enabling the program to continue execution, often with some best guess as to reasonable default behavior. In the text-to-speech case above, it might appear acceptable to code the program in such a way that it simply exits (with a descriptive message) after some maximum number of speech objects are created. Unfortunately, this reasonable strategy would both have prevented the interesting outcome described above and prevented the discovery of the bug, which more than likely had other ramifications.

Providing access to all versions has become a somewhat common practice in open-source projects, e.g., those hosted on SourceForge. See http://sourceforge.net/.
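A fail-state handled in this spirit might look like the following sketch. The class, cap, and reuse policy are all hypothetical (RiTa's actual speech objects are not implemented this way): when a resource limit is exceeded, the factory warns on the console and degrades gracefully, here by reusing the most recent object, instead of exiting.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative fail-state handling: warn and supply a best-guess
// default rather than terminating, so unexpected execution paths
// (and the artworks built on them) remain reachable.
public class GracefulFactory {

    static final int MAX_VOICES = 4; // hypothetical resource cap
    static final List<String> voices = new ArrayList<>();

    static String createVoice(String name) {
        if (voices.size() >= MAX_VOICES) {
            String fallback = voices.get(voices.size() - 1);
            // notify the user of the unexpected state...
            System.err.println("[WARN] voice limit (" + MAX_VOICES
                + ") reached; reusing '" + fallback + "'");
            // ...but keep the program running with a default
            return fallback;
        }
        voices.add(name);
        return name;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++)
            System.out.println(createVoice("voice" + i));
    }
}
```

The fifth and sixth requests succeed (with a warning) rather than aborting the sketch, which is precisely the behavior that leaves room for the kind of productive misuse described above.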
Similarly, considerations of misuse compelled us to permit a very wide range of parameter values for all functions in RiTa, enabling users to experiment with an object by “overloading it or otherwise stretching its operating parameters” [Kelly 2009]. This means that for most applications, the edges of the range (say for text-to-speech voice parameters), are not accessed in typical use-cases. But for those interested in putting the system to unexpected uses, the fact that the larger parameter range is accessible can be quite productive.
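This permissive approach to parameters can be sketched as follows. The method name, parameter, and "typical" range are invented for illustration; the point is only the policy: out-of-range values are noted on the console but accepted, rather than clamped or rejected.

```java
// Hypothetical permissive parameter handling: extreme values are
// permitted (with a console note) so that users can stretch the
// system's operating parameters for artistic effect.
public class PermissiveParams {

    static double setSpeechRate(double rate) {
        double min = 0.5, max = 2.0; // illustrative "typical" range
        if (rate < min || rate > max)
            System.err.println("[NOTE] speech rate " + rate
                + " is outside the typical range [" + min + ", " + max + "]");
        return rate; // accepted unchanged in either case
    }

    public static void main(String[] args) {
        System.out.println(setSpeechRate(1.0));  // typical use
        System.out.println(setSpeechRate(12.0)); // extreme, but allowed
    }
}
```

An alternative design would clamp the value into range; the choice here deliberately trades safety for expressive reach at the edges.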
John Ippolito, one of the earliest theorists to have acknowledged this type of artistic