Assignment: Make a controller for a DMX-based lighting plot.
For the assignment I tried both Node.js and Arduino, and found the Arduino (an Arduino MKR1010 board with an Arduino MKR485 shield) much easier to program.
To start off, I connected the DMX cable to the shield.
I then set the DMX channel and tested the light using the ArduinoDMX library's example, trying a few effects to gauge the light's response speed. I was surprised by how quickly it could switch on and off, much like a strobe.
Having worked a lot with audio and visuals in Max/MSP, I wanted to try using the program to control the light. The three main effects I wanted to explore were panning, rhythm, and the synchronisation/desynchronisation of audio and visuals.
I started off by trying to send multiple values from Max to the Arduino over serial communication (the panning experiment).
Sending from Max: convert the outgoing message into ASCII character codes using the [atoi] object.
Receiving on the Arduino: split the incoming characters into substrings (substring()) and convert them to integers (toInt()), following the StringSubstring and StringToInt examples from Arduino.
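To make the receiving-side parsing concrete, here is a minimal sketch of the same split-and-convert logic, written in Python purely for illustration (the actual Arduino sketch uses String.substring() and toInt(); the comma-separated message format and the function name are my assumptions):

```python
def parse_message(line: str) -> list[int]:
    """Split a comma-separated serial line, e.g. '127,64', into a list
    of integers, mirroring Arduino's substring() + toInt() approach."""
    return [int(field) for field in line.strip().split(",") if field]
```

For example, parse_message("127,64\n") returns [127, 64], ready to be written to two DMX channels.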
Once the serial communication ran smoothly, I started building the sound files and the controls. These were the results.
Max/MSP patch and Arduino code: https://github.com/hellonun/dmx-control-maxmsp-arduino
I am very happy with both the control functions and the light quality. I hope to experiment with other sounds and visual effects in the future, as well as making the controls more user-friendly.
Assignment: Use predictive models to generate text: either a Markov chain or an RNN, or both. How does your choice of source text affect the output? Try combining predictive text with other methods we’ve used for analyzing and generating text: use RNN-generated text to fill Tracery templates, or train a Markov model on the output of parsing parts of speech from a text, or some other combination. What works and what doesn’t? How does RNN-generated text “feel” different from Markov-generated text? How does the length of the n-gram and the unit of the n-gram affect the quality of the output?
Original text reference: https://www.nuntinee.com
Code: to be uploaded
Sample outputs:
['Intervention to its sequin surfactant towards people in these nuances one places once and 16 vibration be',
'Intervention (active installation to testing and music. Track the fundamentals of white light. The audio ',
'Interactions and relation from Google Location of collaborative materials; clear. This project uses one p',
'Interventions: Pet, energetically in reality by the the bigger picture of our minds try to engage with a ',
'Interactive project between scents of light is asked together). The abstract morphing and plays with dist',
'Intervention to them. By tuning to testing an associated using the installations creative installation vi',
'Interactive means towards these are created using p5.js for determining to their motion, time and senses ',
'Interacts with them. However, we never real time. The installation visuals Process and or uses taken a pr',
'Interventions; how scents an arduino. Code: https://github.com/hellonun/seeing Senses agree, our minds ge',
'Interactive community Day, New York 2019 ITP Wintervention to test human reaction patterns and sense of f']
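For reference, the kind of character-level Markov chain behind outputs like these can be sketched in a few lines of Python (this is an illustrative sketch, not the to-be-uploaded code; the function names and parameters are my assumptions):

```python
import random
from collections import defaultdict

def build_model(text, n=4):
    """Map every character n-gram in the text to the list of
    characters observed immediately after it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, n=4, length=100, seed=None, rng=None):
    """Grow a string by repeatedly sampling a follower of the last
    n characters; stop at a dead end or at the target length."""
    rng = rng or random.Random(0)
    out = seed if seed in model else rng.choice(list(model))
    while len(out) < length:
        followers = model.get(out[-n:])
        if not followers:  # this n-gram was never seen with a continuation
            break
        out += rng.choice(followers)
    return out
```

A longer n-gram makes the output more coherent but also closer to verbatim quotes of the source; character-level units drift and misspell more than word-level ones, which is part of why the outputs above feel half-formed.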
Light installation exploring the effects of white light through motion, time and refractive materials; clear cubes and prisms. 
ABOUT THE PROJECT
Even though we constantly experience reflection, refraction and diffraction, they go unnoticed unless we pay close attention. The moments we do usually slow us down and make us more present: watching a sunset, or noticing shadow patterns in a pool.
Little Sun generates light patterns and colors using a high-voltage LED light, three acrylic cubes and a prism cube (a cube made of four prisms joined together). The white light is shone through the rotating cubes, creating a slow-motion light dance that draws us to pay close attention to its changing forms.
The rotation is controlled by a stepper motor and an Arduino.
From the beginning of the class I've been interested in exploring 'sensorial perception', as it is accessible to all, something we cannot avoid. I wanted to make people question the way their minds construct physical reality, and to play with that construction by creating something that distorts it.
So how do we construct our reality? We take in information from our various senses and synthesise it. When the senses are in sync, our perception is heightened; when they are not, our minds struggle and reconstruct the input in whatever way makes sense, producing illusions. This can be disorienting, but on the plus side it can also make us more aware, more nuanced and more appreciative of our everyday physical reality.
With this thought I started experimenting with the audio visual realm of our perceptions.
Audio guides time. Vision guides direction.
1. Beats and pop
2. Digital direction
3. Analog direction
Going forward, I would like to explore the thin line where we can still recognise the patterns yet let go of the need to synchronise them.