Tuesday, November 30, 2010

THE PERFECTION OF SUICIDE LIES IN AMBIGUITY









The suicide of Guy Debord on 30 November 1994 has led to the former Situationist being caught up in a number of discourses that he may, at one time, have viewed as distasteful. In the 'Guardian Weekend 1994: Review of the Year' (Guardian 31/12/94), Debord was name-checked in the 'Those We Have Lost' column alongside two other suicides, rock singer Kurt Cobain and Great Train Robber Buster Edwards (other deaths noted included those of Derek Jarman, Richard Nixon, John Smith, Jackie Onassis, Dennis Potter, Kim Il Sung, Peter Cushing, Karl Popper and Keith Joseph). Clearly, Debord's timing was good, because if he'd killed himself at the beginning of the year, the mainstream media might well have forgotten his suicide by its end.


Messages placed on the internet about the suicide included one from Edward A. Shanken who wrote: 'Guy Debord did not kill himself. He was murdered by the thoughtlessness and selfishness of so-called scholars (primarily trendy lit-criters) who colonized his brilliant ideas and transformed his radical politics into an academic status symbol not worth the pulp it's printed on...' This generated a few angry responses, the import of which was that Debord was not another Jim Morrison, Ian Curtis or Kurt Cobain who 'died for our sins.' Shanken didn't address the fact that Debord was utterly obsessed with the notion of 'recuperation,' and that as a consequence, he was to some degree responsible for all the uses made of his work. Debord's version of the Situationist International deposited a good deal of material with archives and museums precisely because it did not want to be forgotten by academia.


John Young used the Net as a soapbox from which to claim that Debord had worked for Mossad: ‘this dazzling and humbling association with real world power beyond the soft-minded literary and philosophical worlds totally mesmerized Debord... The elixitrate mix of sacred and profane literally made Guy drunk with intellectual stimulation and shared worldly risk... the intrigue and daring bond of high mind and base reality was an alchemic transformation of mental to physical like no head-wrought book could come near.' Unfortunately, the intertextual origins of this thesis were plainly evident in Young's claim that he'd learnt of Debord's spook activities from Philip Roth. Young even went as far as asserting that Debord had provided the model for the central character in Philip Roth's novel Operation Shylock: A Confession.


Meanwhile, Malcolm Imrie's obituary in the Guardian of 5 December 1994 absurdly claimed that 'with consummate irony, he (Debord) allowed his work to be republished by Gallimard, entering the pantheon of French literature, just as the pantheon was collapsing.' In the world the Situationists wished to create, such a panegyric would be viewed as supremely ironic. Suicide was an occupational hazard for the Dadaists and Surrealists; perhaps Debord hoped to realise and suppress this tradition by using death as a method of reintegrating himself into the avant-garde. In the meantime, death remains the ultimate commodity, a handy gimmick to help sell works of 'revolutionary theory' in an already oversaturated market.



www.stewarthomesociety.org/debord.html

Dynamic, Projection-Mapped Topographies.



Reasons to be thankful: it seems we’re at the beginning of an explosion of projected digital imagery as a medium, with the best yet to come. And some of the most compelling work right now deals with the most elemental qualities of this medium: how light and space interact.


Take the work of hc gilje. He shares some of his most recent projects, which include the elegant-looking theatrical projections at top:


I was invited by Trøndelag Teater to do a combined physical set design with video projections. It was an adaptation of the Norwegian literary classic “Fuglane” (“The Birds”) by Tarjei Vesaas, with Harry Guttormsen as director. I created an organic physical form, which combined with the videoprojection became a very dynamic landscape.


Cool as tech like Microsoft’s Kinect is, I find myself drawn to work that focuses on the sparest elements, visual etudes in form and composition. I’m particularly interested in this, having been following the writing of John Maeda, whose thinking helped inspire Processing (and, by extension, OpenFrameworks and lots of other stuff).


The other work hc gilje shares fits, to me, in that same vein. How much can you do, in rhythm and space, using only a single line?



snitt (hc gilje 2010) from hc gilje on Vimeo.


An installation for Galleri 21 in Malmö.

A straight line moves slowly through the three rooms of the gallery space, cutting the space into different sections (snitt). The movement of the line, “attacking” the space from different angles, focuses the attention of the viewer on the physical qualities of the space.


The physical properties of the gallery space (the walls, ceiling, floor, door openings, light fixtures, etc.) modulate and break up the straight line into a continuously evolving pattern of line fragments, depending on the position of the viewer and the angle of the line in relation to the architecture.


The solo show explores the concept of “line-space” — a fascinating proposition: that a one-dimensional line can define a three-dimensional volume. More in his blog post:

Conversations with spaces: snitt
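For readers who think in code, here is a rough way to picture the “line-space” idea. This is my own little Processing sketch, not gilje’s software: a single line sweeps around in 3D and its past positions accumulate, so the one-dimensional element gradually suggests a volume.

// Illustration of the "line-space" idea (not gilje's software):
// one line sweeps through 3D, and its accumulated positions hint at a volume.
ArrayList<PVector> trail = new ArrayList<PVector>();
float angle = 0;

void setup() {
  size(640, 480, P3D);
  stroke(255);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  rotateY(angle * 0.3);

  // current endpoint of the sweeping line
  PVector tip = new PVector(200 * cos(angle), 100 * sin(angle * 0.7), 200 * sin(angle));
  trail.add(tip);
  if (trail.size() > 600) trail.remove(0);

  // the line itself, from the origin to the moving tip
  strokeWeight(2);
  stroke(255);
  line(0, 0, 0, tip.x, tip.y, tip.z);

  // the accumulated sweep: many faint past positions of the same line
  strokeWeight(1);
  stroke(255, 40);
  for (PVector p : trail) {
    line(0, 0, 0, p.x, p.y, p.z);
  }

  angle += 0.02;
}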


If you do happen to live in a town like Malmö, Sweden, or Oslo, I imagine you’ll have quite a lot of time this season for quiet reflection, and cause to do some projecting of light – the sun spending very little of the day getting in your way. Let us know what thoughts you have, and what light you project.



SOURCE: http://createdigitalmotion.com/

Kinect Hacking: Why It Matters


Interactive Puppet Prototype with Xbox Kinect from Theo Watson on Vimeo.

When Microsoft gobbled up vision technology and announced they were channeling their own research into a product for their game console, artists, researchers, and hackers lamented. It seemed the tech might be destined only for a handful of mainstream game titles.

Hours after the product launch, however, and one open-source bounty later, it was clear the opposite was happening: Kinect was opening to new possibilities. Some of the world’s leading visual experimenters, many of them regulars in this site’s stories, were quickly pulling in data and reimagining what the device could do. And that’s just in the first days: given its sophistication, the real potential lies ahead.

I pulled together a number of the artist-hackers to get their thoughts:

Phil Torrone talks to us from Adafruit Industries, who put up the bounty for the project and contributed to the EFF to protect the rights of hackers.

Theo Watson, OpenFrameworks co-originator, is one of the original hackers and has built Mac support.

Memo Akten, OpenFrameworks contributor, is building expressive and artistic applications of the tech.

Kyle McDonald, artist and visual researcher, is working with massive clouds of point data, building on his previous work in 3D scanning.

Dan Shiffman, Processing guru and NYU faculty, is working on tools to make this more accessible to Processing and Java coders.

Adafruit on the Competition to Hack Kinect


Phil Torrone of Adafruit explains what went on behind the scenes as Adafruit Industries offered a bounty to hack Kinect.
CDM: What was important about this particular project?
Torrone: The results speak for themselves; the creative potential was unlocked.
When did you actually make the decision to commit to this?
The day before the Kinect was launched in the USA.
Have you been surprised by anything that’s happened? Was this the pace of progress you anticipated?
We never underestimate the creativity and passion of people who do and love open source.
You of course gave some cash to the EFF and not just the winner … have you had conversations with EFF about how to protect artists working on the project, or the legality of the work?
We did not talk with EFF at all prior to this effort; we did let them know we were sending them $2k after we declared a winner.
With so much going on, what’s the best way for interested parties to keep track of what’s going on?
Likely the Google Group.
How can someone best contribute?
There’s the Google Group, and there’s GitHub, where we put our data dump and code.
https://github.com/adafruit/Kinect


Kyle McDonald reaches into 3D virtual space, represented by a massive point cloud, through his Kinect code. Photo (CC-BY-SA-NC) Kyle McDonald; used by permission.

Where should people go to learn more about this stuff?


Shiffman: In terms of doing Kinect with Processing, I think learning the basics of Processing first (duh), with a focus on image processing, is probably good:
http://processing.org/learning/pixels/
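As a tiny taste of the kind of pixel-level work that tutorial covers (my own sketch, not taken from the tutorial), here is a Processing example that builds an image in code and thresholds it by brightness; the same loop structure is what you would later point at a Kinect depth image.

// Minimal Processing pixel-processing example: build an image in code,
// then walk its pixels[] array and threshold by brightness.
PImage img;

void setup() {
  size(320, 240);
  img = createImage(width, height, RGB);
  img.loadPixels();
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      // simple horizontal gradient, standing in for any camera frame
      img.pixels[x + y * img.width] = color(map(x, 0, img.width, 0, 255));
    }
  }
  img.updatePixels();
}

void draw() {
  loadPixels();
  for (int i = 0; i < img.pixels.length; i++) {
    // keep bright pixels, black out the rest
    pixels[i] = brightness(img.pixels[i]) > 127 ? color(255) : color(0);
  }
  updatePixels();
}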
Watson: There is an openFrameworks wrapper being developed:

ofxKinect @ GitHub

…and active forum threads:

libfreenect discussion @ GitHub

openFrameworks forums
Also the main libfreenect development is happening on GitHub:

libfreenect
There is a big cleanup coming to the API, so things might be in a bit of a state of flux for the next few days, but hopefully soon we will have super solid drivers/APIs for all platforms.
Akten: I’ll be posting my little demos (whenever I find a moment, which will be rare for the next few months unfortunately :S) at

https://github.com/memo/ofxKinect-demos and of course my blog memo.tv

You’ve probably seen the post on CAN [Creative Applications]; it has a good summary of the early demos and the history of how the open-source drivers came about (Hector etc.):

http://www.creativeapplications.net/news/kinect-opensource-news/
And I saw a tweet that someone had it working with Cinder.
McDonald: For my work in general, see http://kylemcdonald.net/
For my pre-Kinect 3d scanning work, see:
http://www.flickr.com/photos/kylemcdonald/sets/72157613657773217/

http://code.google.com/p/structured-light

https://groups.google.com/group/structured-light
With Kinect, everything I’ve done with 3d scanning for the last two years is starting to take on a new meaning…

The best places for following Kinect stuff are:
1. the openkinect Google Group

2. #openkinect on freenode (super active discussion)

3. the GitHub wiki


Why hack the Kinect in the first place?


McDonald: It’s essential that we develop drivers and libraries for Kinect, because we have to decide what new technology means to us.
Kinect has taken a technology out of academic labs and defense agencies and put it in our living room. Now we need to decide where we want to point the camera.
Shiffman: A cheap (relatively speaking) “3D” camera is killer technology for the interaction design / computational art community. This kind of tech has been around, but it’s either been too hard to find or prohibitively expensive. I think that you will see a ton of creative uses (in digital art, exhibition design, assistive tech, etc.) that you wouldn’t find if it was only used for console gaming.
Watson: It’s a really amazing piece of hardware for a really affordable price. To put it in perspective, I currently have a commercial depth camera on loan which produces a similar quality depth image, and it retails for $7000! That is really way out of reach for most people who might be hobbyists, artists or researchers, but $150 is incredibly cheap for what the technology allows you to do.
Akten: First, check out Kyle’s little poem :)
For me, it’s very simple. I like to make things that know what you are doing, or understand what you are wanting to do, and act accordingly. There are many ways of implementing these ideas. You can strap accelerometers to your arms and wave them around, and have the accelerometer values drive sound or visuals. You can place various sensors in the environment, you can use camera(s) to track movement, etc. Ultimately, you create an environment that ‘knows’ what is happening inside it, and responds as you designed and developed it to. What excites me is not the technology, but how you interpret that environment data, and make decisions as a result of it. How intuitive is the interface? You can randomly wire the environmental parameters (e.g. orientation of arm) to random parameters (e.g. in audio and/or visuals), and it will be fun for a while, but I don’t think it will have longevity; it won’t be an *instrument* that you can ultimately learn to play and naturally express yourself with.

In order to create an instrument, you first need to establish a language of interaction – which is the fun side of interaction design, but you always have the technical challenge of making sure you can create a system which can understand that language. It’s too common to design an interaction but not have the technical capabilities to detect or implement it – then you have a system which reports incorrectly and makes inaccurate assumptions, resulting in confusing, non-intuitive interaction. So you need a smarter system, and the more data you have about the environment, the better you can understand it, and the smarter, more informed decisions you can make. You don’t *need* to use all the data all the time, but it is there if you need it.
Kinect is ultimately a depth-sensing camera. To put it simply, it returns a normal RGB image just like a webcam, but for every pixel in the image, it also returns a ‘distance to camera’. This kind of tech has been around for a while, but it has been very expensive (minimum thousands of dollars) and definitely not a consumer device – more for labs, robotics, the military, etc. That depth information is a ton of extra data. With that extra data, we are a lot more knowledgeable about what is happening in our environment, we can understand it more accurately, and thus we can create smarter systems that respond more intuitively.
One point which is often overlooked – which is a very important point – is not only ‘what can you do with the Kinect that you couldn’t before’, but ‘how much simpler is it technically to do something with the Kinect, as opposed to using other consumer devices’. This really is a very important point. A simple example is the recent rough demo I posted of drawing in 3D with your hands.

ofxKinect 3D draw 001 from Memo Akten on Vimeo.
That is completely possible to do pre-Kinect. You would need two webcams, you would need to set up your lighting quite specifically. You would want control over your background and overall lighting of the space. And then you would need a lot of hairy maths and code. With the Kinect, you just plug it in, make sure there isn’t any bright sunlight around, and with a few lines of code you have the information you need. So now that interaction is available to developers and artists of *all* levels, not just hardcore math geeks – and that is very important. Once you have loads of people playing with these kinds of interactions (who pre-Kinect would not have been able to), then we are bound to see loads of really innovative, fresh applications for it. Sure, we’ll get a ton of “pinch to zoom and rotate the photo” demos which will get sickening after a few thousand, but people will be developing ideas that you or I would never have thought of, but instantly love – which in turn will spark new ideas in us to go off and play with – which in turn will feed others.
It’s still really early days yet; it’s just been a case of getting the data off the Kinect into the computer, and then seeing what that data actually is, how reliable it is, how it performs, and what we can do with it. Once this gets out to the masses, that’s when the fun will start pouring in :)
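To make Memo’s “few lines of code” point concrete, here is a hedged sketch of the basic move behind that 3D-drawing demo: scan a depth frame for its closest point and treat it as a brush tip. The depth frame below is faked by a stand-in function, since exactly how you fetch real data depends on which wrapper (ofxKinect, libfreenect, the Processing library) you use; swap in your wrapper’s raw depth array where noted.

// Sketch of the idea behind drawing in 3D with your hands:
// find the nearest point in a depth frame and use it as a brush.
// getDepthFrame() is a stand-in that fakes a moving "hand";
// replace it with the raw depth array from your Kinect wrapper.
int W = 640, H = 480;
ArrayList<PVector> trail = new ArrayList<PVector>();

void setup() {
  size(640, 480, P3D);
  stroke(255);
  noFill();
}

int[] getDepthFrame() {
  // fake depth data: a far background plus one near blob circling around
  int[] depth = new int[W * H];
  int cx = (int)(W/2 + 200 * cos(frameCount * 0.05));
  int cy = (int)(H/2 + 150 * sin(frameCount * 0.03));
  for (int i = 0; i < depth.length; i++) depth[i] = 2000; // ~2 m background
  for (int y = cy - 10; y < cy + 10; y++)
    for (int x = cx - 10; x < cx + 10; x++)
      if (x >= 0 && x < W && y >= 0 && y < H) depth[x + y * W] = 600; // ~60 cm "hand"
  return depth;
}

void draw() {
  background(0);
  int[] depth = getDepthFrame();

  // find the pixel closest to the camera
  int best = 0;
  for (int i = 1; i < depth.length; i++)
    if (depth[i] < depth[best]) best = i;

  // turn it into a 3D point and append it to the stroke
  float x = best % W;
  float y = best / W;
  float z = depth[best];
  trail.add(new PVector(x, y, z));
  if (trail.size() > 500) trail.remove(0);

  // draw the accumulated stroke as a polyline in 3D
  beginShape();
  for (PVector p : trail) vertex(p.x, p.y, -p.z * 0.2);
  endShape();
}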

What might people do with these tools as artists?


Watson: There is quite a lot that it can be used for. For interactive installations, we are often dealing with trying to track people in a space. Typically this requires careful lighting and IR cameras and it can be quite a tricky issue, but with the Kinect the depth image allows us not only to track people but to understand where they are in relation to each other in z-space. This is just one application, however. Another really nice feature is that it has pixel-matched color and depth cameras, and this could allow for ‘greenscreen-less’ live greenscreening. And then of course there is its use as a 3D scanner, for building depth maps, understanding the space around us, etc., and more possibilities than I probably realise.
Shiffman: All sorts of things I can’t possibly imagine! (Just the fact that having depth makes background removal so easy is killer for my students.)
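Here is a hedged sketch of the trick Theo and Dan both point at: depth-based background removal, keeping only pixels closer than some cutoff. The depth frame is again simulated by a stand-in function, because the exact call to fetch it depends on your wrapper; the thresholding loop is the part that carries over.

// Depth-threshold "greenscreen-less" background removal, sketched in Processing.
// fakeDepthFrame() stands in for the per-pixel depth your Kinect wrapper returns.
int W = 320, H = 240;
int cutoffMM = 1000; // keep anything closer than roughly one metre

void setup() {
  size(320, 240);
}

int[] fakeDepthFrame() {
  // a far wall with a nearer rectangle in the middle, pretending to be a person
  int[] depth = new int[W * H];
  for (int y = 0; y < H; y++)
    for (int x = 0; x < W; x++)
      depth[x + y * W] = (abs(x - W/2) < 60 && y > 60) ? 800 : 2500;
  return depth;
}

void draw() {
  int[] depth = fakeDepthFrame();
  loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // foreground if closer than the cutoff; everything else goes black
    // (in a real sketch you would copy the matching RGB pixel instead of green)
    pixels[i] = (depth[i] < cutoffMM) ? color(0, 255, 0) : color(0);
  }
  updatePixels();
}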
McDonald: I’ve noticed tendencies to work at very different levels of abstraction.
Some people are most interested in the raw data, the inherent glitches, the aesthetic of 3d scanning.
Others are interested in slightly generalized data, maybe the idea of ‘scenes’ that are being captured and analyzed, reconstructed. Some people are interested in specific applications — object recognition, pose estimation, gestures. These are the most abstracted.
I expect work to come from all different levels, in every different medium.
Sculptors will record and build unusual models of spaces informed by 3D scanning; spatial mash-ups will be standard fare; 3D printing for 3D slit scanning. Motion spaces, negative spaces. Paths through space over time.
Sound artists and musicians will use the device to control standard audio parameters, or use the values as input parameters to complex synthesis environments and for controlling spatialized sound with large speaker arrays.
Photographers will work with long exposures in combination with 3d-reactive projection to augment layers of the space over time.
Interaction designers will invent new gestures and modes of interaction specifically targeted at the strengths of the sensor.
Interactive art will experience a minor renaissance as a variety of tasks that were previously very difficult become very simple (e.g., tracking someone against a background that is the same color, or even tracking someone against a moving background).
… etc., etc. :)

XBox Kinect running on OS X (with source code) from Theo Watson on Vimeo.

What’s technically possible with the libraries now; what’s coming?


Watson: At the moment, we can get back the depth image and color image from the two cameras, and access the motor, LED and the accelerometer of the device. Some developers are now working on accessing the four microphones, which allows for location of sounds in 3D space. Also, a big part of the Kinect as it relates to the Xbox is the full-body skeletal tracking, which from a researcher’s or artist’s perspective is a very valuable feature. This is implemented in software on the Xbox and is the result of many years’ work by some of the top people in the field. A big part of the future research will be at the software level, developing tools that build off of and extend the functionality of the hardware, like open-source implementations of the realtime skeletonization code.
McDonald: The general rundown is that Linux is fastest, OS X is 5-10 fps behind, and Windows is just starting to work.
ofxKinect was originally developed by Dan Wilcox and Theo Watson, with some minor contributions from me, and is now also being developed by Arturo Castro. It runs well on OS X and Arturo is still adding Linux support.
https://github.com/ofTheo/ofxKinect
Right now it’s only possible to get the RGB and depth images, and to get the depth image in centimeters (which is not what the sensor returns by default). Next will be alignment of the RGB and depth images, and of course making it cross-platform. Other suggestions are on the OF forum.
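For a sense of what “depth in centimeters” involves: the sensor reports an 11-bit disparity-like value per pixel rather than a distance, so libraries apply a calibration curve. The Processing snippet below uses one early approximation that circulated in the OpenKinect community; treat the exact constants as approximate, since different libraries calibrate differently (and better).

// Converting Kinect raw depth values to centimetres, sketched in Processing.
// rawToCm() uses an early community approximation from the OpenKinect effort;
// the constants are approximate and real libraries may calibrate differently.
float rawToCm(int raw) {
  if (raw >= 1090) return -1;  // roughly: no reading / out of range
  return 100.0 / (raw * -0.0030711016 + 3.3309495161);
}

void setup() {
  // print a small table of raw values and their approximate distances
  for (int raw = 400; raw <= 1000; raw += 100) {
    println(raw + " -> " + nf(rawToCm(raw), 0, 1) + " cm");
  }
}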
Shiffman: Right now the library just returns two pixel arrays (640×480 RGB image and 640×480 image with depth mapped to grayscale). My to-do list is (a) make all the raw data available, (b) optimize for speed, and (c) add any little analysis tricks / features that might be particularly useful. Basically, anything people do with the openkinect project and OF, I’ll try to add as a feature for Java / Processing.


Stay tuned to CDMotion for more… and let us know if you have specific comments or questions, or have seen work that inspires you. Ed.
More reading…
Fantastic round-up of what’s happened so far from our friend Creative Applications Network:

Kinect – OpenSource
Memo reflects on his blog…

Kinect – why it matters
And over on Create Digital Music, I’ve got more for anyone interested in MIDI or C#/.NET:

Kinect with MIDI

SOURCE: http://createdigitalmotion.com

Archigram Archival Project


The Archigram Archival Project makes the work of the seminal architectural group Archigram available free online for public viewing and academic study. The project was run by EXP, an architectural research group at the University of Westminster. Archigram began life as a magazine produced at home by the members of the group, showing experimental work to a growing, global audience. Nine (and a half) seminal, individually designed, hugely influential, and now very rare magazines were produced between 1961 and 1974. The last ‘half’ was an update on the group’s office work rather than a ‘full’ Archigram magazine. The six members of Archigram are Peter Cook, David Greene, Mike Webb, Ron Herron, Warren Chalk and Dennis Crompton. Cook, Greene and Webb met in 1961, collaborated on the first Archigram magazine, and later invited Herron, Chalk and Crompton to join them; the magazine name stuck to them as a group.



More than 200 projects are included in the Archigram Archival Project. The AAP uses the group’s mainly chronological numbering system and includes everything given an Archigram project number. This comprises projects done by members before they met, the Archigram magazines (grouped together at no. 100), the projects done by Archigram as a group between 1961 and 1974, and some later projects.

How it would be, if a house was dreaming...

By far my favourite facade projection: 555 Kubik beats all the others.

555 KUBIK | facade projection from urbanscreen on Vimeo.

Digital Flesh


Making Future Magic: iPad light painting

Making Future Magic: iPad light painting from Dentsu London on Vimeo.


Funky Forest





Funky Forest - Interactive Ecosystem from Theo Watson on Vimeo.

Funky Forest is a wild and crazy ecosystem where children manage the resources to influence the environment around them. Using their bodies or pillow "rocks" and "logs", children can dam and divert the water flowing from the digital stream on the floor to the forest to make different parts grow. If a tree does not receive enough water it withers away, but by pressing their bodies into the forest children create new trees based on their shape and character. As children explore and play they discover that the environment is inhabited by a variety of sonic life forms and creatures who appear and disappear depending on the health of the forest. As the seasons change the creatures also go through a metamorphosis.


Emergent Urbanism: MadDecent - MajorLazerApp - FaceTracking



"Emergent Urbanism: MadDecent - MajorLazerApp - FaceTracking: 'Here i
s a quick application Vik made using the OpenCv with processing (JAVA). ..."
Download it here!
http://emergenturbanism.blogspot.com/
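For anyone who wants to try the same trick, here is a minimal face-detection sketch along the lines of what that post describes, presumably built on the hypermedia OpenCV library that was the standard Processing wrapper at the time. The method names are recalled from that library’s examples, so treat them as approximate and check its documentation; this is a sketch of the approach, not Vik’s application.

// Minimal face tracking in Processing with the hypermedia OpenCV library
// (the OpenCV-with-Processing combination the post refers to).
// Method names recalled from that library's examples; verify against its docs.
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                   // open the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // load the face classifier
  noFill();
  stroke(0, 255, 0);
}

void draw() {
  opencv.read();                                   // grab a frame
  image(opencv.image(), 0, 0);

  // detect faces and draw a box around each one
  Rectangle[] faces = opencv.detect();
  for (Rectangle f : faces) {
    rect(f.x, f.y, f.width, f.height);
  }
}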