After further feedback I’ve decided to focus on my original idea: an interactive cinematic experience that harnesses Machine Learning as a creative partner. The feedback highlighted two areas that need further consideration. The first is the potential size of the installation, which could be difficult to test, document and fit into an exhibition; its large scale could also be difficult to monitor from a Machine Learning perspective. The second is the relationship between audience and machine. I have come to realise that there needs to be direct feedback from the machine for the audience to understand that they are having a collaborative effect on the narrative progression. With those elements in mind, I have envisaged the following interactive installation:

[Image: updated_installation_clear]

As the conductor of this interactive experiment, I need to balance three elements in this collaborative system, each with its own technical intricacies.

Machine

Recently there has been a lot of development in tools designed to make Machine Learning accessible to artists, designers and musicians. A good fit for my project is the Wekinator, created by Rebecca Fiebrink. It’s been really fun experimenting with the software, and it clearly has a huge amount of potential for my proposed interaction. Here are some of my initial learning models, to give an idea of the user interface:

[Image: screenshot_collage_clear]

The software uses Open Sound Control (OSC) to communicate with other programs. I plan on using Processing to control the database of narrative elements, so I will need to download and install the oscP5 library. The following is some sample code that I will need to modify and incorporate into my Processing sketch.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
int x, y;

void setup() {
  size(1280, 720);
  // Listen for incoming OSC messages on port 12001
  oscP5 = new OscP5(this, 12001);
}

void draw() {
  background(255);
  rect(x, y, 100, 100);
}

// Called automatically whenever an OSC message arrives
void oscEvent(OscMessage theOscMessage) {
  // The address pattern must match whatever the sending program uses
  if (theOscMessage.addrPattern().equals("/address")) {
    x = theOscMessage.get(0).intValue();
    y = theOscMessage.get(1).intValue();
  }
}
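One modification the sketch will need: Wekinator’s continuous outputs arrive as floats, typically normalised to the 0–1 range, so they have to be scaled to pixel coordinates before they can drive the rectangle. A minimal sketch of that mapping, written as plain Java so it can be tested outside Processing (the class and method names are my own placeholders):

```java
public class OscScale {
    // Map a normalised Wekinator-style output (0.0 to 1.0) onto a pixel
    // range, clamping out-of-range values so the shape stays on screen.
    static int toPixels(float normalised, int maxPixels) {
        float clamped = Math.max(0f, Math.min(1f, normalised));
        return Math.round(clamped * maxPixels);
    }

    public static void main(String[] args) {
        System.out.println(toPixels(0.5f, 1280)); // centre of a 1280px stage
        System.out.println(toPixels(1.2f, 720));  // clamped to the bottom edge
    }
}
```

Inside the Processing sketch the same arithmetic could live in oscEvent(), converting each incoming float before assigning it to x or y.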

I will be using a narrative progression system based on a gaming-style decision tree, with the machine making decisions based on the current and previous positions of the audience.
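The decision tree itself can stay very simple: each node holds a narrative element, and the machine branches by comparing where the audience is now with where it was a moment ago. A rough sketch in plain Java, where the node names and the left/right branching rule are my own placeholder assumptions, not the final narrative logic:

```java
import java.util.ArrayList;
import java.util.List;

public class NarrativeTree {
    // One branching point in the story. Leaves have no children.
    static class Node {
        String scene;
        Node left, right;
        Node(String scene) { this.scene = scene; }
    }

    // Placeholder rule: if the audience has moved left (current position
    // is less than the previous one), take the left branch; otherwise
    // take the right. Walks until it reaches a leaf or runs out of input.
    static List<String> traverse(Node root, int[] positions) {
        List<String> path = new ArrayList<>();
        Node current = root;
        path.add(current.scene);
        for (int i = 1; i < positions.length && current.left != null; i++) {
            current = positions[i] < positions[i - 1] ? current.left : current.right;
            path.add(current.scene);
        }
        return path;
    }

    public static void main(String[] args) {
        Node root = new Node("opening");
        root.left = new Node("conflict");
        root.right = new Node("harmony");
        root.left.left = new Node("tragic ending");
        root.left.right = new Node("redemption");
        root.right.left = new Node("redemption");
        root.right.right = new Node("celebration");

        // Audience drifts left, then back right:
        System.out.println(traverse(root, new int[]{5, 3, 4}));
    }
}
```

In the installation the position array would be fed by the Wekinator output rather than hard-coded, but the traversal logic would stay the same.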

Narrative Elements

To keep the communication of the narrative as clear as possible, I will be using a series of shapes to act as narrative protagonists. This should allow for flexibility in the creation of the animated elements, as well as offering a layer of interpretation back to the audience. I have taken inspiration from the following:

Audience

Once prototype versions of the preceding elements are in place, I’ll need to test how understandable the system is from an audience perspective. To achieve this, I plan to make a paper maquette (a small-scale model) of the complete cinematic space, with representative audience members that can be moved around within it. The benefit is that a maquette is much quicker to make, and as a working model it will let me gather plenty of feedback from different audience members. In terms of affordances, it’s also far more appealing to move a representative version of yourself, which removes the self-consciousness from the decision-making process.