interactive systems: scotrail project
For this project we were asked to build on ideas developed by last year's Year 3 students and carry them through to completion. The builds will eventually be assessed by the client and installed in an exhibition due to take place in summer 2024.
I decided to step back from creating the content to be displayed and instead dedicate myself to the more technical practice of building the systems responsible for outputting that content. I believe this was the best option for me and the group: firstly because there are people in the class far more capable at creating the content than me, and secondly because I'm already quite experienced in building systems like these through my own work, and have a good background in projection mapping.
All in all I think this project was handled very well by all of my classmates, and this page will show the good deal of coordination between the "content creators" and me, resulting in two satisfying and robust works that are ready to be displayed.
I worked on the display systems for the works titled "Tickets" and "Ghost Objects". The first is mainly a projection mapping with varying slides (I call it an over-engineered PowerPoint, lol), while the second is a Pepper's ghost illusion with, again, varying slides. Both works are infographic in nature: they have to display illustrative and written information clearly, at an approachable pace, and do it in a compelling manner.
My most important consideration regarding both of these works is that they are relatively simple. Displaying information through slides is notoriously easy (I'm not talking about creating the content itself, which does require real skill: research, 3D modelling, video editing, etc.), BUT a few details complicated the operation, forcing me to adopt an approach that allows for interchangeability.
The main complication is that we couldn't rely on a fixed layout for either of the works. We couldn't run tests in situ, nor recreate the in-situ experience in a different environment to finalise the works' appearance. The main difficulty on my end was to make a work that doesn't have a definite form, but that also allows me to set up its form quickly once it's time to install. This is why I chose TouchDesigner: while a work like this could have been approached in more conventional animation software like After Effects, such software doesn't allow the same flexibility, considering the volatile nature of the installation.
I'll now go through each of the systems, starting with the Tickets.
the tickets
To put it briefly, the work consists of the following: a multitude of rectangular shapes are suspended in the air with fishing line. Onto these shapes are projected old ScotRail tickets for historically relevant locations, followed by written and photographic elements that contextualise those locations.
The most important thing about this work is creating a script that describes at what point each element comes into place, where it's placed and for how long. In this case, the script for each panel goes as follows:
- step one: a ticket is displayed on each panel for a certain amount of time;
- step two: one of the tickets is "selected", and the animation of the journey the train would take to reach the selected location plays out across all of the panels;
- step three: once the train reaches its destination, information is displayed. One panel shows written information, while the others show photos and postcards of the location;
- step four: return to the initial cycle, displaying all of the tickets again;
- step five: a different ticket is selected, and the loop continues until every displayed ticket has been selected.
Given this script, we can describe the system as an array of triggers deciding when something appears, disappears, and is substituted by something else, plus a few simple transitions to ensure a smooth and continuous narrative.
I drew a timeline of sorts to help me visualise one full cycle of the animation:
INSERT PICTURE OF TIMELINE DRAWING HERE
From the drawing we can tell the animation works as follows: [0 - 10 seconds] the tickets are shown (fading in from black); [10 - 20 seconds] the map is cross-faded in and then fades out to black; [20 - 35 seconds] the location info is displayed (a different image every 5 seconds; the name of the location appears first, followed by one section of written info after 5 seconds and another section 5 seconds after that).
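To make that cue structure concrete, here's a plain-Python model of one cycle (not TouchDesigner code; the function name and the 5-second print step are just for illustration, while the 10/20/35 second boundaries come from the drawing):

```python
def phase_at(t):
    """Return the content phase active at t seconds into the cycle."""
    if t < 10:
        return "tickets"        # tickets fade in from black and hold
    elif t < 20:
        return "map"            # journey map cross-fades in, then fades to black
    else:
        return "location_info"  # name first, then two info sections, images every 5 s

cycle_length = 35
for t in range(0, cycle_length, 5):
    print(t, phase_at(t))
```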
Now that I have a clear idea of how the animation should go, I can start actually working in TouchDesigner. My finalised patch looks like this:
A few considerations about this software and why my patch looks the way it does. TouchDesigner's UI is basically non-existent in its natural form; it is at the user's discretion to create a network that is comprehensible. Depending on the network's application there can be different styles of UI, such as a passive HUD to see what's going on, or a full-fledged interactive UI. Over the summer I had the chance to work on some complex projects involving projection mapping as well as content creation, manipulation and interactivity. I fell down the rabbit hole of optimisation and UI building, and how the two go hand in hand, because of a fundamental rule of TD: cooking.
Normally, a computer program follows a "linear" type of computing. Using Processing we learnt how computation happens from top to bottom, left to right. There's an exception to be made for the looping sections of textual code, but it's safe to say that computation in a text-based programming environment follows a linear route. In TD things get more complicated. Cooking (TD's term for computation) is non-linear: whether an operator cooks depends on how it is used. This is why a network can be built in two ways. There's the "easy" way, which involves dropping all of your nodes in one layer and working on your program there, and then there's the more advanced way, which involves compartmentalising each section, keeping different data types away from each other and being more aware of the cooking.
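As a loose analogy (plain Python, not TD code, and a big simplification of what TD actually does): cooking is pull-based, so a node only recomputes when something downstream asks for its value and one of its inputs has changed.

```python
class Node:
    """A toy pull-based node: it only 'cooks' when pulled while dirty."""
    def __init__(self, compute, *inputs):
        self.compute = compute
        self.inputs = inputs
        self.dirty = True          # flipped back to True when an input changes
        self.cached = None

    def pull(self):
        # if nothing downstream ever pulls this node, compute() never runs
        if self.dirty:
            self.cached = self.compute(*(i.pull() for i in self.inputs))
            self.dirty = False
        return self.cached

a = Node(lambda: 1)                # a source node
b = Node(lambda x: x * 2, a)       # a downstream node
print(b.pull())                    # both nodes cook once: 2
print(b.pull())                    # cached: nothing cooks again
```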
The easy method is mainly for sketching something on the go. It won't have you going crazy duplicating your windows across different screens to multi-task, or being hyper-conscious about naming each channel or each end operator. But it's crystal clear that the first method is synonymous with dirty code, while the second produces clean code, where operators cook only when needed.
I could spend hours talking about optimisation in TD, but this is just a brief surface-level point. Often in TD a legible program is a program that runs better. In this case I'm handling moving image in 4K in a real-time projection mapping environment, and although my laptop's RTX 3050 is a decent mid-tier graphics card, the task could be daunting without the appropriate measures.

You can see in the image above that there's one container called "channel_data", which is responsible for all of the different triggers according to the timeline. It modulates three "content storage" containers ("ticket_show", "map_show" and "location_show"), and that's all they do (with a couple more things, as I'll show later). "texture_routing" does what it says: it routes each texture to the panel that needs it (through the Kantan Mapper). I'll touch on cooking more as I talk about the patch. Now I can dive into the "channel_data" container and explain what in the fudge is going on.
channel_data
Channel data is just numbers. It is handled by CHOPs (channel operators), which are the green nodes. In this container there are only CHOPs, and they all share a common point of origin.
The point of origin is the Timeline CHOP. This is the brain of the whole operation because it is linked to the timeline. In TD we can decide whether to work in real time, detached from the timeline, or in a more conventional timeline-based way. This type of work would normally call for a timeline-based approach, but as I mentioned before there's no way for me to know what the finished work will look like, so I had to build a hybrid workflow: one with precise timing coordinates that also allows for quick editing in case things change. This is why I'm using the Timeline CHOP. By declaring the timeline length I can be sure of when to trigger things, and by using a system of Trigger and Count CHOPs I can also be sure of what to change accordingly.
If I were to work purely timeline-based, I would have to know the precise positioning of my mapping before rendering. With a hybrid workflow I can be present during the install, move a few things, and then render.
The Timeline CHOP shows my timeline's progress as two bars: the first bar is the timeline in frames, the second is the timeline translated into seconds. A Select CHOP is plugged in to use only the seconds channel, for ease of use.
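Outside of TD, that conversion is trivial; a minimal sketch, assuming a hypothetical 60 fps timeline:

```python
rate = 60.0                 # hypothetical timeline frame rate

def seconds(frame):
    return frame / rate     # e.g. frame 600 -> 10.0 s, where the map phase begins
```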
Colour-coded network boxes help me keep things tidy and legible. The first network box contains the CHOPs responsible for the tickets' display.
Here you can see seven rows of CHOPs (one for each ticket, AKA for each panel). The first CHOP is a Trigger. At the assigned numerical value it triggers its output to go from 0 to 1. The chosen value is the time at which we want the tickets to be displayed, so for the first trigger it's just 1, meaning the ticket will be displayed after one second of animation.
After the trigger there's a Rename CHOP, responsible for naming the channel data appropriately. It's important that all of these channels are named consistently, so they don't confuse the referencing later in the chain.
All of the triggers link into a Math CHOP, which combines the different channels into one in additive mode. I chose additive mode so that it simply takes whichever channel is outputting 1 and passes it along. Since all of the previous triggers are set to fire at different times, only one channel is ever in use at once.
Under the first seven rows of CHOPs responsible for activating the tickets' output there are seven more rows responsible for deactivating them. These are plugged into the reset port of a Count CHOP, meaning the first seven rows make the count go up and the other seven reset it to 0.
This Count CHOP is the third most important operator in the chain. By setting it to Loop Min/Max and setting the limit to 1, we can ensure it only ever outputs 0 OR 1, and by feeding its reset port we can further ensure that when the tickets don't need to be displayed the final data equals 0.
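Modelled in plain Python (not TD code, with hypothetical show/hide times), the Trigger + Count chain boils down to this kind of on/off gate:

```python
def gate(t, show_at, hide_at):
    """0/1 gate: the 'show' trigger bumps the count, the 'hide' trigger resets it."""
    count = 0
    if t >= show_at:
        count += 1            # the activating trigger increments the Count
    if t >= hide_at:
        count = 0             # the deactivating trigger feeds the reset port
    return min(count, 1)      # Loop Min/Max clamps the output to 0 or 1

print(gate(0.5, show_at=1, hide_at=9))   # 0 -> ticket still hidden
print(gate(4.0, show_at=1, hide_at=9))   # 1 -> ticket displayed
print(gate(12.0, show_at=1, hide_at=9))  # 0 -> ticket hidden again
```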
Finally, the 0-to-1 channel gets remapped to declare the position of the tickets in space, using a Math CHOP. This way I can set the starting position of each ticket at -5 (out of frame) when the incoming channel is 0, and set the display position on the Y axis (our variable, since it depends on how the panels are set up) whenever the incoming channel is 1. I decided to have the tickets scroll up from below, and I've used Lag CHOPs with varying lag values to make the scroll feel more organic.
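Again as a plain-Python sketch (not TD code; y_on and the lag amount are placeholders that get set per panel during the install), the Math + Lag step is roughly:

```python
def remap(gate_value, y_on, y_off=-5.0):
    """Math CHOP: map the 0/1 gate to a Y position (off-screen vs on-screen)."""
    return y_off + gate_value * (y_on - y_off)

def lag(current, target, amount=0.1):
    """Lag CHOP, roughly: one-pole smoothing, called once per frame."""
    return current + amount * (target - current)

y = -5.0
for frame in range(30):
    y = lag(y, remap(1, y_on=0.0))   # gate just flipped to 1: ticket eases upward
```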
This is the main gist of it all, and the method is applied to every single part of the work. I won't explain how the journey or the location info are displayed because the approach is very similar and it would just mean repeating myself two more times. I will upload my patch, and although you might not be able to see the output because of licence limitations, you will be able to see my comments in the network itself.
ticket_show
In this container lie all of the tickets to be displayed on each panel. Although this part of the network is quite simple, there are a few subtle things to mention. The first is that the tickets are actually videos, all of the exact same length, encoded in the HAP Q codec. This codec matters because it is decompressed on the GPU, so the software has less of a hard time outputting it, speeding up the cooking process. As mentioned before, each cycle I need one ticket at a time to be "selected", which calls for an aesthetic change that highlights the selected ticket in some way. This could be done in many different ways, but to simplify the network I simply asked Fedre to edit each ticket video to include a "not-selected" and a "selected" phase. All I had to do on my end was play the videos that are selected and halt the videos that are not.
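In plain-Python terms (not TD code, names hypothetical), the play/halt logic is as simple as:

```python
def advance(playheads, selected_index):
    """Step only the selected ticket's playhead; the others hold their frame."""
    for i in range(len(playheads)):
        if i == selected_index:
            playheads[i] += 1   # selected: the video plays into its "selected" phase
        # not selected: playhead untouched, the video halts on its current frame
    return playheads

playheads = [0] * 7             # one playhead per panel
playheads = advance(playheads, selected_index=2)
```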
The example below shows how a full ticket video looks:
map_show
This is the simplest of the containers: it simply contains the video of the journey the train makes for each ticket. Initially we planned on having one smooth journey ticket by ticket (as in: from Glasgow to Edinburgh, from Edinburgh to Aberdeen), so my plan was to have one single video encapsulating that journey, but eventually we found it was impossible to get the appropriate tickets to accommodate something like that. I think this container's network will eventually end up looking quite similar to "ticket_show"'s (seven rows with seven different videos of seven journeys). Unfortunately I didn't receive the content for this network, so I made do with a placeholder video of a random map I found on Pexels.
location_show
This container stores all of the information to be displayed for each location. Inside it are more containers, each named to correspond to a location.
Inside each of those containers lies this small network: three Movie File In TOPs connected respectively to three Transform TOPs. Why three? Because I want the text-based information placed on a single panel, while assigning three panels to each image. Again, the videos contained here are all HAP Q, all the same length, and all contain pre-made transitions. The MvI called "text" contains the textual info: it shows the title, then a bit of info, and then another bit. The two "visual" MvIs each contain a small slideshow of three pictures, crossfading between images every 5 seconds.
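A plain-Python sketch of the arrangement (not TD code; the panel-to-role assignment is hypothetical, while the 5-second interval and the one-text/three-and-three-image split come from the design):

```python
def slideshow_index(t, interval=5.0, n_images=3):
    """Which of the three pictures a 'visual' MvI shows at time t."""
    return int(t // interval) % n_images

panel_roles = {
    0: "text",       # title, then one info section, then another
    1: "visual_a",   # three-picture slideshow, crossfading every 5 s
    2: "visual_a",
    3: "visual_a",
    4: "visual_b",
    5: "visual_b",
    6: "visual_b",
}
```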
texture_routing
This is where the fun stuff happens. Here all of the textures are collected and sent to the Kantan Mapper. Each panel gets its own row, as always, and each row gets a system of Switches connected to Select TOPs. These Selects contain the journey and the info for each location. The Switches are fundamental, allowing me to change which texture is displayed next. Another important thing about the Switch is that it stops whatever is not selected from cooking, minimising the performance cost.
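Modelled in plain Python (not TD code, hypothetical source names), each panel's routing is just an index into its sources; in TD, the inputs the Switch isn't pointing at never cook:

```python
sources = ["ticket", "journey", "location_info"]

def route(switch_index):
    return sources[switch_index]   # in TD, only this branch gets cooked

panel_indices = [0] * 7            # every panel starts on its ticket
panel_indices[3] = 2               # e.g. panel 4 switched to the location info
textures = [route(i) for i in panel_indices]
```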
There isn't much more to be said about this network. Here's an example I made in class to test how long it would take me to install the whole thing. Although I only installed three shapes in this case, the network allowed me to adjust for the missing shapes, and it took me about 20 minutes to map and render the whole thing.
Please note, all of the textures displayed are placeholders, plus the shapes look weird because I couldn't place my projector properly.
Great job to Fedre and Ashritha for being my content creators for this work, and for being so patient with my strange requests.
the ghost objects
The Ghost Objects is basically a Pepper's ghost animation meant to showcase the individual parts of a train named Kelton Fell. The parts are highlighted and labels appear, informing the audience of the names and characteristics of the components that make up a train of that era.
The procedure I followed is pretty similar to the one adopted for the Tickets: a pre-declared timeline with a system of time-based triggers that determine which animation plays at a specific time. This time, though, I am working with 3D models instead of 2D videos.
The patch's structure is also similar to the previous one. Channel data is kept in the container on the left, and the content is handled in the container in the centre. The Window is just for outputting the image.
Nina and Mehal worked on the creation of the model itself and did an astounding job, both in building the model and in suggesting ideas for the animation. A train expert had quite a few disagreements with our model, but I used it anyway as a placeholder.
Inside the channel data container lie a few other containers, aptly named. In the master channels container sits the data meant to modulate each part of the animation. The five containers on the side each represent a part of the train, named "whole" (which is just the whole train), "boiler", "chassis", "cab" and "wheels".
This is the content of the "master_data" container. The necessary Timeline CHOP is set up in its usual way, selecting the seconds that will be used for the triggers. Ignore lfo1, I forgot to delete it. lfo2 is used to animate the pulsating bloom that we'll see later.
These are the channels contained inside the "single parts" containers. To break it down: a Select references the timeline data. In the first row you can see how two Triggers feed into the Count CHOP: one trigger makes the model appear, the other makes it disappear. A Math CHOP shapes the data to define the Y coordinate of the 3D object, and a Rename gives our channel a coherent name. Finally, a Filter smooths the signal and a Lag applies an overshoot to that smoothing, giving the motion a sort of bounciness.
The second row applies the rotation. I want the models to rise up while rotating once, allow some time for the labels to appear, and then rotate once in the opposite direction while falling back down. To do this I use the Triggers and Count in the same way (applying a slight delay to my triggers), a Math shapes the signal to go from 0 to 360 (and naturally back again), and once more a Filter + Lag chain provides the smoothness and bounciness, as sketched below.
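A plain-Python model of that rotation chain (not TD code; the spring constants are placeholders standing in for the Filter + Lag settings):

```python
def rotation_target(gate_value):
    """Math CHOP: scale the 0/1 gate to degrees - one full turn up, one back down."""
    return gate_value * 360.0

def smooth_with_overshoot(current, velocity, target, stiffness=0.2, damping=0.75):
    """An under-damped spring: it slightly passes the target before settling,
    roughly the bounciness the Filter + Lag chain gives."""
    velocity = damping * velocity + stiffness * (target - current)
    return current + velocity, velocity

angle, vel = 0.0, 0.0
for frame in range(60):
    angle, vel = smooth_with_overshoot(angle, vel, rotation_target(1))
```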
These are the models inside the visualiser container. To make things look a bit more "techy" I added some filters to them in TD. As you can see, the network for the visualiser is very basic: all I had to do was organise the models to appear and disappear properly, apply a new material, and finish up the render chain with a couple of filters. Kieron will take care of labelling each part because I was going mad wae it.
These blue operators are called SOPs (surface operators) and they manage 3D data. A brief chain for each of the models is needed just to keep things a bit cleaner. A Transform SOP puts all of the models at the same point in 3D space; this is crucial so we won't have to manually adjust each channel's data to make the animation work. Each model gets its own Geometry COMP. A Geometry COMP is basically your average 3D environment, where you place cameras and lights and apply materials. One good thing about Geometry COMPs is that you can add multiple ones while still having all of your models appear in the same single environment (unless you don't want them to).
A Camera is needed for the rendering chain, and I set up a new material for my models: a Line MAT, to give that sweet wireframe look. There is also a dedicated Wireframe MAT, which is far less performance-intensive, but I noticed it takes every single triangle into account, making the models a bit confusing to decipher. The Line MAT (the yellow operator on the top right) can also declare colours, so I chose this light blue because it looked quite "techy".
Finally, a Render TOP translates the 3D models into 2D space. An RGB Key TOP keys out the background, and the bloom COMP (a custom-made component) applies some glow (modulated by the previously mentioned LFO). Lastly, a Transform TOP is applied in case the model needs re-fitting to the screen. That's it!
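Sketched in plain Python (not TD code; the stubs just stand in for the operators named above):

```python
def rgb_key(frame): return frame            # stands in for the RGB Key TOP
def bloom(frame, amount): return frame      # stands in for the custom bloom COMP
def fit_to_screen(frame): return frame      # stands in for the final Transform TOP

def post_chain(rendered_frame, lfo_value):
    frame = rgb_key(rendered_frame)         # key out the background
    frame = bloom(frame, amount=lfo_value)  # glow pulsing with the LFO
    return fit_to_screen(frame)             # re-fit to the screen if needed
```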