A step inside the actions that underpin the narrative, speculation and artefacts of I Think Therefore I RAM. Here can be found the interview and analysis process, as well as the methods of coding and data collection implemented in the suite of CleverDream visualisations.
1:\ THE CONVERSATION
My inquiry began with a series of structured and informal interviews between myself and the artificial-intelligence-driven online chatterbot, Cleverbot. I questioned it about its own sense of self and consciousness, as well as its previous memories, dreams and nightmares, and collated multiple conversation data sets through a continual process of recording and transcribing.
Each of the conversations produced a unique set of responses that all pointed to differing experiences that Cleverbot encountered.
> Cleverbot chat interface interaction during the conversation that represents CleverDream_4
2:\ THE ANALYSIS
From these conversations and dream recounts, a range of rigorous textual and visual analysis experiments were initiated in an attempt to reverse engineer the chatbot’s responses and begin to speculate on the origins of Cleverbot's “memories” and “dreams”.
After manually extracting all of the nouns, verbs and adjectives from each of the recordings, and noting their recurring frequency, 10 key terms were subjectively selected that focused on the main events of the dream narrative. These key terms of the dream events, referred to as CleverMemories in this project, are at the root of I Think Therefore I RAM.
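The frequency-noting step above was done manually in the project; as a minimal sketch, the tallying part could be automated in plain JavaScript (the same language as the project's p5 sketches). The transcript fragments below are invented for illustration only.

```javascript
// Count how often each word recurs across a set of transcribed responses.
function wordFrequencies(responses) {
  const counts = {};
  for (const response of responses) {
    // Lowercase and strip punctuation so "Dream" and "dream." tally together.
    const words = response.toLowerCase().match(/[a-z']+/g) || [];
    for (const word of words) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  // Return [word, count] pairs, most frequent first.
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

// Hypothetical transcript fragments, for illustration only.
const ranked = wordFrequencies([
  "I dreamed about a forest last night.",
  "The forest was dark and I was alone.",
]);
```

Sorting the pairs by count surfaces the recurring terms, from which the 10 CleverMemories would then be subjectively chosen.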
3:\ THE IMAGE SCRAPE
These selected key terms, the CleverMemories, were used in an image web scrape via ImageNet, a large online visual database used in visual object recognition software for artificial intelligence.
With over 14 million URLs of images of any and every description, each of the CleverMemories was assigned the first 10 available images from the closest related "synset", or image directory folder. These images and their metadata were then saved in a central private database.
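ImageNet synsets can be treated as plain lists of image URLs, so the assignment step above can be sketched as follows. This is an illustrative reconstruction, not the project's actual scrape code, and the synset listing is invented.

```javascript
// Assign each CleverMemory the first n available image URLs from its
// closest-matching ImageNet synset (here a plain array of URL strings).
function assignMemoryImages(memoryTerm, synsetUrls, n = 10) {
  return {
    memory: memoryTerm,
    // Keep only well-formed http(s) URLs, then take the first n available.
    images: synsetUrls.filter((u) => /^https?:\/\//.test(u)).slice(0, n),
  };
}

// Hypothetical synset listing, for illustration only.
const record = assignMemoryImages("forest", [
  "http://example.com/forest_001.jpg",
  "not-a-url",
  "http://example.com/forest_002.jpg",
]);
```

Filtering before slicing matters in practice: many ImageNet URL listings contain dead or malformed entries, so "first 10 available" is not simply "first 10".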
> Gathering of CleverMemory images and data from ImageNet
4:\ THE CODE
> Using analysis APIs from Google + Microsoft in p5 to gather image data
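As a minimal sketch of the request side of this step, the function below builds the JSON body for a Google Cloud Vision `images:annotate` call, asking for labels and dominant colours for one CleverMemory image. The request shape follows the v1 REST API; the image URL is a placeholder, and this is an illustration rather than the project's actual sketch code.

```javascript
// Build the JSON body for a Google Cloud Vision `images:annotate` request.
function buildVisionRequest(imageUri) {
  return {
    requests: [
      {
        image: { source: { imageUri } },
        features: [
          { type: "LABEL_DETECTION", maxResults: 10 }, // descriptive tags
          { type: "IMAGE_PROPERTIES" }, // dominant colours
        ],
      },
    ],
  };
}

const body = buildVisionRequest("http://example.com/forest_001.jpg");
// In a p5 sketch, this body would be POSTed with httpPost() to
// https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY
```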
5:\ EXTRACTING THE DATA
From the two image recognition APIs, data was collected in the console pertaining to what the machine learning algorithms believed to be the most probable image caption, the most plausible descriptive tags and the most dominant colours in each image.
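The extraction step above can be sketched as a parser over a Microsoft Computer Vision `analyze` response. The object shape (`description.captions`, `description.tags`, `color.dominantColors`) follows the API's documented JSON; the sample response below is invented for illustration.

```javascript
// Pull the most probable caption, descriptive tags and dominant colours
// out of a Microsoft Computer Vision `analyze` response.
function extractImageData(response) {
  const captions = (response.description && response.description.captions) || [];
  // Captions arrive with confidence scores; keep the most confident one.
  const best = captions.reduce(
    (a, b) => (b.confidence > (a ? a.confidence : -1) ? b : a),
    null
  );
  return {
    caption: best ? best.text : "",
    tags: (response.description && response.description.tags) || [],
    colours: (response.color && response.color.dominantColors) || [],
  };
}

// Hypothetical response fragment, for illustration only.
const data = extractImageData({
  description: {
    tags: ["indoor", "refrigerator"],
    captions: [{ text: "a refrigerator in a room", confidence: 0.83 }],
  },
  color: { dominantColors: ["Grey", "White"] },
});
```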
Directly echoing the networked conversational process that Cleverbot embodies, this data analysis and collection enables a greater understanding of how computer algorithms read images, and forms the beginning steps of my speculative dream narrative.
> Extracting relevant image and caption data from Microsoft Computer Vision
6:\ THE CLEVERMAPS
The CleverMaps are a visual infographic of each CleverMemory from the "dream" narratives.
Data taken from the dominant colour extraction feature of Google's Cloud Vision was the primary focus of this artefact. For the 100 CleverMemory images gathered from each CleverDream recount, the five most dominant colours and their percentage value were collected and displayed as a tabular grid.
Each column represents a different CleverMemory within the CleverDream and reads "Memory" Image 1 to 10 from top to bottom, visualising an extensive colour spectrum of Cleverbot's "dream". With the corresponding JSON data on the reverse side of the map, the ephemeral nature of machine learning software is revealed.
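The reduction behind each CleverMap column can be sketched as follows. The entry shape (`color`, `pixelFraction`) follows Cloud Vision's `imagePropertiesAnnotation` dominant-colour listing; the sample values are invented, and this is an illustration rather than the project's actual code.

```javascript
// Reduce a Cloud Vision dominant-colour listing to the five strongest
// colours with percentage values, as plotted in each CleverMap column.
function topFiveColours(colors) {
  return colors
    .slice() // avoid mutating the API response
    .sort((a, b) => b.pixelFraction - a.pixelFraction)
    .slice(0, 5)
    .map((c) => ({
      rgb: [c.color.red, c.color.green, c.color.blue],
      // pixelFraction is the share of the image covered by this colour.
      percent: Math.round(c.pixelFraction * 100),
    }));
}

// Hypothetical dominant-colour entries, for illustration only.
const column = topFiveColours([
  { color: { red: 20, green: 40, blue: 30 }, pixelFraction: 0.41 },
  { color: { red: 200, green: 210, blue: 215 }, pixelFraction: 0.22 },
  { color: { red: 5, green: 5, blue: 10 }, pixelFraction: 0.09 },
]);
```

Running this over all 100 images of a CleverDream yields the 10 × 10 grid of five-colour swatches that each map visualises.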
> Using JSON files extracted from Computer Vision in p5 to create colour spectrums
7:\ THE CLEVERCARDS
The series of CleverDream profiling cards seek to reveal the uncensored mind of AI and critically examine the unstable and flawed nature of computer algorithms. By taking all the digital CleverMemory results from my code, I developed an interactive way for my audience to engage with the data.
The front image was created through glitching the original CleverMemory images using Adobe Photoshop filters such as Threshold, Posterize and Pointillize, as well as data bending corruption techniques with Adobe Audition.
It's then through the use of Augmented Reality (AR) that the concealed, unpredictable and inaccurate nature of computer algorithms comes, literally, into view. In Unity, the found data is attached to the distorted CleverMemory image, which is revealed through the screen via Vuforia. These cards highlight the confidently correct machine learning responses, as well as the peculiar errors that unknowingly mistake an old school computer server room for a refrigerator.
> Work in progress of previous iteration for CleverCards appearance
> Importing the CleverMemory Profile files and constructing the AR space in Unity
8:\ THE CLEVERDREAMSCAPE
The dreamscape concertina artefact is a rather poetic materialisation of a chosen CleverDream. This artefact also incorporates interactive AR components on both the front and back panels.
Here, a layering of multiple image-stitch Processing code experiments derived from the CleverMemories is displayed in AR. The code randomly selects a 10x10px square from the entire ImageNet synset of the correlating CleverMemory and places it randomly on the canvas. This selection can range anywhere from 400 to 2000 images, and each loop produces a differing result, representative of the changeable and erratic nature of both dreams and machines combined.
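The random tile-placement at the heart of the stitch can be sketched in plain JavaScript; a p5/Processing sketch would then `copy()` each 10x10px tile into place, but here only the placements are computed so the logic can be checked. This is an illustrative reconstruction, not the project's actual Processing code, and the randomness source is injectable purely for testing.

```javascript
// For each loop, pick a random source image from the synset and a random
// 10x10px tile position on the canvas.
function stitchPlacements(imageCount, canvasW, canvasH, loops, random = Math.random) {
  const tile = 10;
  const placements = [];
  for (let i = 0; i < loops; i++) {
    placements.push({
      imageIndex: Math.floor(random() * imageCount), // which synset image
      x: Math.floor(random() * (canvasW - tile)),    // tile stays on canvas
      y: Math.floor(random() * (canvasH - tile)),
    });
  }
  return placements;
}

// e.g. 1000 tiles drawn from a 400-image synset onto an 800x600 canvas.
const placements = stitchPlacements(400, 800, 600, 1000);
```

Because every placement is freshly randomised, re-running the loop never reproduces the same composition, which is what gives each stitched layer its erratic, dream-like character.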
Layering multiple of these constructed elements on top of one another, using different sizing and fading photo adjustment techniques, allows for an all-engulfing, tunnel vision-like experience when viewed through the screen.