
Velociscope (New Media Cruiser-Class Thesis Project 1) 2015.04

Stepping out of the museum.

Adding the element of "speed" into the work.

Walking into the city and interacting with citizens.


We realize that art is a powerful product of civilization. In cities, art is usually displayed in quiet, peaceful galleries for appreciation. When viewing art, some people are pleased while others are inspired. However, the experience is short-lived; people gradually forget their encounters with art after leaving the gallery, returning to the hustle and bustle of city life and to their search for comfort within the gray urban infrastructure of Taipei City. The members of the New Media Cruiser Institute therefore proposed a different form of art. In 2015 we began a series of projects using "speed", a core value of this generation, as the motif of our research, studying the potential of art to exceed the limitations of gallery presentation. Could art be brought out into the city through action? Could it face the crowds, get involved, or even communicate with the city?


In a generation of information explosion, ever-intensifying speed dulls our awareness of physical distance and of the present moment, and the meaning of life and the experience of art are lost. We attempt to visualize the correspondence between speed and time, bringing pedestrians on the street back to the present moment by making them part of the work: their own moving phantoms displayed as art on screens.


The displayed images are the pedestrians' own phantoms on screen: as each new instant is projected, the present moment becomes a memory. Images of each moment are revealed, then vanish into time and speed on the virtual screen, letting the audience experience changes of time and space within a single time frame.


The Velociscope uses a webcam to capture real-time video, which is processed in Processing and shown on a screen. The screen is divided into two sections by an invisible line at its center. The right side shows the unprocessed and the processed video of the present. The left side shows enlarged traces of every captured moment: pixels are extracted along the center line, then extended and enlarged into monotone patches, resembling the effect of a slow camera shutter. These fragments of time elapse continuously, and may compress or extend due to speed.


  • Wrote the Processing code that uses a webcam to capture real-time video, extracts the pixels along the center line, and extends and enlarges them into monotone patches resembling the effect of a slow camera shutter, displayed on the LED screen of the moving lab mounted on a truck.

This part of the Processing code sets the canvas size (1920px wide by 640px high) as well as the size of the camera. It also defines the video's sliceX (the x-position where the camera grabs pixels) and destination (the buffer that the grabbed pixels are written into for display).
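The code itself is not reproduced on this page, so the following is a minimal Processing reconstruction of the setup described above. The names `sliceX` and `destination` come from the text; the 960×640 camera resolution and the slice sitting at the video's center are assumptions based on the half-canvas layout.

```java
import processing.video.*;   // Processing video library

Capture video;
int sliceX;            // x-position of the line where pixels are grabbed
PImage destination;    // buffer that receives the grabbed pixels

void setup() {
  size(1920, 640);                       // canvas: 1920px wide, 640px high
  video = new Capture(this, 960, 640);   // camera size (assumed: right half of canvas)
  video.start();
  sliceX = video.width / 2;              // assumed: slice at the video's center line
  destination = createImage(video.width, video.height, RGB);
}

void captureEvent(Capture c) {
  c.read();                              // refresh the frame when one is available
}
```

This sketch needs the Processing environment and its video library to run; it only establishes the canvas, camera, and buffers that the later steps operate on.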

Different mapped values control the scenes. First, "float a" is mapped to 0~960px and "float b" to the range from the video width minus one down to 0. Second, "float c" is mapped to 640~1 and "float d" to 0~320. All of these numbers control the real-time video, the extended parts of the video, and the monotone patches. In addition, I define the pixels of destination to equal the video's pixels, so that I can manipulate destination directly.
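Processing's built-in `map()` performs this kind of linear re-mapping. The four ranges above can be sketched in plain Java as follows; the normalized control input `t` and the 960px camera width are assumptions for illustration, not values from the original code.

```java
public class MapDemo {
    // Same behavior as Processing's map(): linearly re-map value
    // from the range [start1, stop1] to the range [start2, stop2].
    static float map(float value, float start1, float stop1,
                     float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        int videoWidth = 960;                       // assumed camera width
        float t = 0.5f;                             // assumed control input in [0, 1]
        float a = map(t, 0, 1, 0, 960);             // a: 0..960px
        float b = map(t, 0, 1, videoWidth - 1, 0);  // b: video width - 1 down to 0
        float c = map(t, 0, 1, 640, 1);             // c: 640 down to 1
        float d = map(t, 0, 1, 0, 320);             // d: 0..320
        System.out.println(a + " " + b + " " + c + " " + d);
    }
}
```

Note that `b` and `c` run their output ranges in reverse, which `map()` handles naturally because the interpolation factor is simply scaled into a descending interval.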

Here, I use scale(-1, 1) to mirror the camera image from right to left. The real-time video is then drawn from the center of the canvas (960px) to the right edge (1920px).
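In Processing this mirroring is typically done with a coordinate transform. A sketch of the step, assuming a 960px-wide video on the 1920px canvas:

```java
pushMatrix();
scale(-1, 1);               // flip the x-axis, so the camera image is mirrored
image(video, -width, 0);    // with x flipped, this lands on 960..1920px of the canvas
popMatrix();
```

With the x-axis negated, drawing at x = -width places the 960px-wide frame across screen coordinates 960 to 1920, i.e. the right half of the canvas, mirrored.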

This code draws the extended pixels. It extracts the pixels from destination (where the camera grabs the pixels) and places them from the center of the canvas toward the right, across a width set by "float c".
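One possible reading of this step, sketched with Processing's `copy()`; the names `destination`, `sliceX`, and `c` follow the text, and treating `c` as the stretched width is an interpretation, not confirmed by the original code:

```java
// Stretch the 1px-wide column at sliceX into a monotone patch of width c,
// starting at the canvas center (960px) and extending to the right.
copy(destination, sliceX, 0, 1, destination.height,
     960, 0, (int) c, height);
```

Because the source region is a single pixel column, stretching it horizontally produces the monotone, slow-shutter-like patches described earlier.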

This code is responsible for the moving scene. By capturing the canvas from 1px to 1920px and redrawing it from 0px to 1919px, it shifts the scene left by 1px every frame. As a result, the monotone patches keep drifting toward the left edge of the canvas.
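This continuous left shift can be written as a single Processing `copy()` call on the canvas itself, run once per frame in `draw()` (a sketch of the step, not the original code):

```java
// Move everything currently on the canvas 1px to the left each frame,
// so the monotone patches drift leftwards over time.
copy(1, 0, width - 1, height, 0, 0, width - 1, height);
```

Keeping the source and destination regions the same size shifts the image without stretching it; each new slice drawn at the center then feeds the leftward stream of patches.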

  • Tested the effect of the Processing code on the road, using hand-made pillars fitted with cameras to simulate the real height of the truck.

  • Mounted the camera on the truck and was in charge of all coding functions during the truck's movements on the streets.

Materials: Processing / Arduino / GPS / Camera / Truck / LED
