A step inside the actions that underpin the narrative, speculation and artefacts of I Think Therefore I RAM. Here can be found the interview and analysis process, as well as the methods of coding and data collection implemented in the suite of CleverDream visualisations.


My inquiry began with a series of structured and informal interviews between myself and the artificial-intelligence-driven online chatterbot, Cleverbot.

I questioned it about its own sense of self and consciousness, as well as its previous memories, dreams and nightmares, and collated multiple conversation data sets through a recording and transcribing process.


Each of the conversations produced a unique set of responses that all pointed to differing experiences that Cleverbot encountered.

From these conversations and dream recounts, a range of rigorous textual and visual analysis experiments were initiated in an attempt to reverse engineer the chatbot’s responses and begin to speculate on the origins of Cleverbot's “memories” and “dreams”.

After extracting all of the nouns, verbs and adjectives from each of the recordings, along with a frequency count of each word, 10 key terms focussed on the dream-narrative portion of the conversation were manually selected. These key dream terms, or CleverMemories, are at the root of the project.
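The frequency count behind this selection can be sketched in JavaScript. This is a minimal, hypothetical illustration: it assumes the part-of-speech extraction has already happened, and the sample words and the cut-off of two terms are placeholders, not the project's actual data.

```javascript
// Tally word frequencies from extracted transcript words, then take the
// n most frequent terms as candidate CleverMemories.
function topTerms(words, n) {
  const counts = {};
  for (const w of words) {
    const key = w.toLowerCase();
    counts[key] = (counts[key] || 0) + 1;
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1]) // most frequent first
    .slice(0, n)
    .map(([word]) => word);
}

// Hypothetical words pulled from a dream recount
const words = ['ocean', 'ocean', 'falling', 'ocean', 'falling', 'door'];
console.log(topTerms(words, 2)); // → ['ocean', 'falling']
```

In practice the final 10 CleverMemories were chosen manually; a count like this only narrows the field.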



These selected key terms, the CleverMemories, were used in an image web scrape via ImageNet, a large online visual database used in visual object recognition software for artificial intelligence.


With over 14 million image URLs, each of the CleverMemories was assigned the first 10 available images from the closest related “synset”, or image directory folder. These images and their metadata were then saved in a central database.
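The assignment step can be sketched as follows. This is an assumption-laden illustration, not the project's actual scraper: the synset id, the URLs and the metadata fields are hypothetical placeholders, and fetching the synset's URL list from ImageNet is taken as already done.

```javascript
// Given a synset's list of image URLs, keep the first ten available
// entries and wrap each with minimal metadata for the central database.
function firstTenImages(memory, synsetId, urls) {
  return urls.slice(0, 10).map((url, i) => ({
    memory,           // the CleverMemory term
    synset: synsetId, // hypothetical WordNet synset id
    index: i + 1,     // "Memory" Image 1 to 10
    url,
  }));
}

// Hypothetical synset URL list
const urls = Array.from({ length: 25 }, (_, i) => `http://example.com/img${i}.jpg`);
const records = firstTenImages('ocean', 'n00000000', urls);
console.log(records.length); // → 10
```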


These collections of CleverMemory images are rigorously analysed through two pre-trained image analysis APIs, Google’s Cloud Vision and Microsoft Azure’s Computer Vision. By taking the code and developing a working system in the JavaScript library p5, the APIs no longer exist merely as a technical process, but become an ephemeral material substrate that, in turn, informs my own process.


The outcomes of each CleverMemory in each CleverDream analysis process are written as separate JSON (JavaScript Object Notation) files.


From the two image recognition APIs, data pertaining to what the machine learning algorithms believed to be the most probable image caption, the most feasible descriptive tags and the most dominant colours in each image were collected in the console.
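For the Cloud Vision side, a request covering the features described above can be sketched like this. The feature types shown are Cloud Vision's published ones for labels and dominant colours; the image URL is a placeholder, and the captioning is assumed to come from Azure's Computer Vision service, which is not shown here.

```javascript
// Build the body of a Google Cloud Vision images:annotate request asking
// for descriptive tags (labels) and dominant colours (image properties).
function buildVisionRequest(imageUrl) {
  return {
    requests: [{
      image: { source: { imageUri: imageUrl } },
      features: [
        { type: 'LABEL_DETECTION', maxResults: 10 }, // descriptive tags
        { type: 'IMAGE_PROPERTIES' },                // dominant colours
      ],
    }],
  };
}

const req = buildVisionRequest('http://example.com/memory1.jpg');
console.log(req.requests[0].features.length); // → 2
```

In the p5 system, a body like this would be POSTed to the API endpoint and the JSON response logged to the console and written to file.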


Directly echoing the networked conversational process that Cleverbot embodies, this data analysis and collection enables a greater understanding of how computer algorithms read images, and forms the beginning steps of my speculative dream narrative.


The CleverMaps are a visual infographic of each CleverMemory from the "dream" narratives. 


Data taken from the dominant colour extraction feature of Google's Cloud Vision was the primary focus of this artefact. For the 100 CleverMemory images gathered from each CleverDream recount, the five most dominant colours and their percentage values were collected and displayed as a tabular grid.


Each column represents a different CleverMemory within the CleverDream and reads "Memory" Image 1 to 10 from top to bottom, visualising an extensive colour spectrum of Cleverbot's "dream". With the corresponding JSON data on the reverse side of the map, the ephemeral nature of machine learning software is revealed.


The series of CleverDream profiling cards seeks to reveal the uncensored mind of AI and critically examine the unstable and flawed nature of computer algorithms. By taking all the digital CleverMemory results from my code, I developed an interactive way for my audience to engage with the data.


The front image was created by glitching the original CleverMemory images using Adobe Photoshop filters such as Threshold, Posterize and Pointillize, as well as data-bending corruption techniques in Adobe Audition. It is then through the use of Augmented Reality (AR) that the concealed, unpredictable and inaccurate nature of computer algorithms comes, literally, into view. These cards highlight the confidently correct machine learning responses, as well as the peculiar nuances that unknowingly mistake an old-school computer server room for a refrigerator.


The dreamscape concertina artefact is a poetic materialisation of one of the CleverDreams that also incorporates interactive AR components on both the front and back panels.


Here, a layering of multiple image-stitch Processing code experiments derived from CleverMemories is displayed in AR. The composition is constructed by code that randomly selects a 10x10px square from the entire ImageNet synset of the correlating CleverMemory and places it randomly on the canvas. Each selection can range anywhere from 400 to 2000 images, and each loop produces a differing result, representative of the changeable and erratic nature of both dreams and machines combined.
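The stitching logic can be sketched in JavaScript as a plan of random source squares and destinations. The canvas size, source image size and tile count below are illustrative assumptions; the original experiments run this as Processing/p5 draw calls rather than a precomputed plan.

```javascript
// For each tile, pick a random 10x10px square from a source image and a
// random destination on the canvas.
function stitchPlan(tileCount, canvasW, canvasH, srcW, srcH, tile = 10) {
  const plan = [];
  for (let i = 0; i < tileCount; i++) {
    plan.push({
      sx: Math.floor(Math.random() * (srcW - tile)),    // source square x
      sy: Math.floor(Math.random() * (srcH - tile)),    // source square y
      dx: Math.floor(Math.random() * (canvasW - tile)), // destination x
      dy: Math.floor(Math.random() * (canvasH - tile)), // destination y
    });
  }
  return plan;
}

// Hypothetical dimensions: 400 tiles onto an 800x800 canvas from 500x375 sources
const plan = stitchPlan(400, 800, 800, 500, 375);
console.log(plan.length); // → 400
```

In p5, each entry would be drawn with something like `copy(img, sx, sy, 10, 10, dx, dy, 10, 10)`, looping over a fresh plan each run so no two stitches are alike.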