Archive for March 2010

Detection with Processing

March 14, 2010

Right, I’ve been going around in circles, but I’m hoping this is the last time. I’ve decided to go back to Processing as it lets me take the feed from the web cam and manipulate it with very little code, which I’m told is more reliable in this context than Flash. So even though I found this really cool example that uses Flash (try the demo if you have a web cam): http://blog.soulwire.co.uk/code/actionscript-3/webcam-motion-detection-tracking, I’m going to use Processing instead, which is actually easier to understand (now that I look at it properly) and will probably take less learning to adapt.

The official info on Processing can be found on their web site: http://processing.org/
And here are a couple of lines from their home page:

“Processing is an open source programming language and environment for people who want to program images, animation, and interactions.”

I will be using OpenCV (a library imported into Processing) to get a basic black and white image of the floor area, which will be projected onto my work mounted on the wall. This way the viewer can literally move around on the floor to alter the areas that are illuminated, and play around with manipulating the shadows and reflections projected from that. Obviously it may not work as well with just heads and shoulders in view (remember it will be an aerial view), but that’s something I need to test in the next couple of weeks.

Here are a couple of test images from the camera view, processed using OpenCV and its blobs() method. By the looks of it, this works out where whole ‘blobs’ are in the image and constantly checks where lines merge or disconnect between objects, so if one object comes in front of another it changes where the edges are detected (I think).
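Just to show how little code is involved, here’s roughly the sort of sketch I’ve been playing with. It’s based on the blob detection example from the OpenCV library page (linked further down) and written from memory, so the exact method names and parameter values should be checked against their documentation rather than taken from me:

import hypermedia.video.*;

OpenCV opencv;

void setup() {
  size(640, 480);
  opencv = new OpenCV(this);
  opencv.capture(width, height);   // start grabbing frames from the web cam
}

void draw() {
  opencv.read();                   // grab the current frame
  opencv.threshold(80);            // push it to plain black and white
  image(opencv.image(), 0, 0);     // show what the camera/threshold sees

  // find the 'blobs' - connected white areas between a minimum and maximum
  // size (these numbers are guesses to tweak, not gospel)
  Blob[] blobs = opencv.blobs(20, width * height / 2, 20, true);

  // draw the outline of each blob in green
  noFill();
  stroke(0, 255, 0);
  for (int i = 0; i < blobs.length; i++) {
    beginShape();
    for (int j = 0; j < blobs[i].points.length; j++) {
      vertex(blobs[i].points[j].x, blobs[i].points[j].y);
    }
    endShape(CLOSE);
  }
}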

I changed the contrast and colour from the default grey and white, and used a book cover and a CD for the following images, to give a better idea of how it cuts out things that don’t have whole areas defined and focuses on those that do:

Image of a book cover processed with the blobs() method using OpenCV in Processing

Image of an intricate book cover as seen using the blobs() method with OpenCV in Processing

Image of CD processed with OpenCV using the blobs() method

The results aren’t as predictable as they appear in the above images, though. They constantly change, and it’s hard to pinpoint exactly how movements and changes in the view affect what is displayed. I will need to play around with this quite a bit to make it less erratic and more intuitive so that it works better in the show.
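One idea I might try (completely untested at this point, so treat it as a guess) is smoothing the detected positions over the last few frames so the display doesn’t jump with every flicker. Here’s a tiny stand-alone sketch of the idea, with the mouse standing in for a blob centroid:

// The mouse stands in for a detected blob centroid; the circle follows an
// average of its last N positions, which steadies out sudden jumps.
int N = 10;                      // frames to average over - bigger = steadier but laggier
float[] xs = new float[N];
float[] ys = new float[N];
int idx = 0;

void setup() {
  size(640, 480);
  noStroke();
}

void draw() {
  background(0);
  // store the newest sample in a simple ring buffer
  xs[idx] = mouseX;
  ys[idx] = mouseY;
  idx = (idx + 1) % N;
  // average the stored samples
  float ax = 0, ay = 0;
  for (int i = 0; i < N; i++) {
    ax += xs[i];
    ay += ys[i];
  }
  fill(255);
  ellipse(ax / N, ay / N, 30, 30);
}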

The code is really short and simple and there are quite a few examples on the OpenCV resource page: http://ubaa.net/shared/processing/opencv/

You don’t even need to understand this stuff to try it out. Simply download the necessary software and additional libraries from the two links I’ve provided. After installing, you can either try out the examples already in the Processing library (really cool ways of producing generative art with this) or paste in code found on the two sites and play around with the results. There are also loads of examples where you can use the cursor, or movement through a web cam, to change the visuals. There is much fun to be had!

Pyramids

March 5, 2010

I’ve been experimenting with some good old reflective card to create 3D shapes that could mirror well as collective components of a larger shape.

I started off with just the outer shell and got a bit carried away with this initial shape and how it worked with my reflective pattern sheet:

This is a head-on view looking into the pyramid shell; the inside is reflective, the outside just white

This is the pyramid before adding the back panel. The pattern is mirrored in interesting ways. The top bit looks like a scary eye!

Pyramid - top, angled view. Placement of the top of the triangle means the pattern is better tessellated and therefore works better in creating an infinite pattern within the pyramid

Invisible pyramid - reflective panel added to outer wall

This one is my favourite because, with the addition of the outer reflectivity, an illusion is created whereby only the edges of the 3D shape are visible. The rest of the shape looks semi-transparent, as if showing the underlying pattern, when it is actually a reflection of the pattern around all sides, including the inside. I really like this aspect and would love to play around with it some more if I get the time.

Moving on, I started making smaller pyramids to fit inside the large shell to try and recreate a tessellated look without a 2D pattern. Here’s how I constructed it:

Construction process for reflective pyramids structure

And here’s a better view of the final structure – a sort of open-ended pyramid filled with smaller pyramids which were also open-ended:

Pyramids!

It’s nothing major, only a small tester model, but at a large scale I think it could look really good. I noticed that the gaps between some of the edges weren’t such a bad thing either, as they let light come in through the back and illuminate the inner space, allowing the reflections, symmetry and geometry to show more clearly. It’s especially nice to look in closely, as if enclosed by the reflected walls, and get the impression you could be encompassed by this structure. If it were life-size, I think sitting inside would be quite entrancing.

In a way it would be really good to be able to create many different pieces that reflect the developments in my research but that would be like setting up a massive exhibition of my own! (Maybe one day)

We still don’t know for sure how much space we’ll get for our individual work in the end-of-year MA show. I’m hoping to get a prototype completed soon so that I not only know for myself what scale would work best, but can also use the prototype to indicate scale and usability to others.

Practical solutions

March 3, 2010

I don’t think I realised how worried I was becoming about needing to learn a load of electronics (using the Arduino, Processing and connecting up all the potential sensors) until I noticed how relieved I was to hear of alternatives that might work just as well, if not better.

I attended Leon’s electronics workshop, and the day before I spoke to a previous student about my aims; they both suggested video tracking as a solution. This sounds perfect! The potential is huge.

Also, I got a quote back from a company about a matrix array sensor – guess how much… $3,000–$4,000!! Yeah, I know.

OK, so back to the solution: I found some really cool examples of video tracking in conjunction with Flash, which is how Leon suggested it be used. The ones on this page are more like games, but with a bit of editing they could do the trick: http://www.discombo.co.uk/cam-experiments.htm

Also, because it uses ActionScript (which I at least recognise), it might be more realistic for me to pick up in practice. As this practical prep is taking up much of my time, I find myself blogging less. However, there’s a lot going on in this brain of mine (only some of it daydreaming), so here’s hoping it all comes together soon so that I can get back to the aesthetics of the work and spend a good amount of time on that again.

Thinking things through

March 2, 2010

All this electronics stuff is exciting but weird. I’m still not sure what I’m doing, but I feel I’m making tiny bits of progress in finding suitable solutions for my design, which, by the way, is actually quite ambitious. But if I don’t give it a shot I’ll regret not trying, so I’m going to anyway.

I think there are some specialised products out there that could be better suited to my installation, but these are either in other countries or only used in major manufacturing industries. I’ve contacted a few people who have either made their own or who produce these products, and am waiting in hope that they will be able to assist me with my work. There are also a few examples on YouTube of people making their own fabric sensors, which is my plan B.

The basic sensors come as switches or resistors for single triggers. So you can imagine that if one person were to stand on the flooring and their weight was detected by a sensor, this would send a message to the computer to project light onto the sculpture, creating the reflected projection I am aiming for. However, what if more people come and start walking on the flooring? Would I only be able to send one message, and therefore only have one projection of light? Would there be a way to make all the sensors activate projections through a single application? That’s my current predicament: I don’t want the work to mess up because too many people are interacting with it, and I don’t want to restrict it so that only one or two people can interact with it.
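Just to convince myself that a single application could listen to several sensors at once, here’s a rough sketch. It assumes a made-up set-up where an Arduino sends a comma-separated line of 0s and 1s over serial, one value per floor sensor (e.g. “1,0,0,1”) – that message format is purely my own invention for illustration – and the sketch lights up one region of the ‘projection’ per active sensor:

import processing.serial.*;

Serial port;
int[] states = new int[4];           // assume 4 floor sensors for this test

void setup() {
  size(640, 480);
  // open the first serial port found - in practice the right port would need picking
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

// called whenever a full line arrives from the (hypothetical) Arduino
void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  for (int i = 0; i < parts.length && i < states.length; i++) {
    states[i] = int(parts[i]);       // 1 = someone standing on that sensor
  }
}

void draw() {
  background(0);
  // one band of 'projection' per sensor, lit only while that sensor is pressed
  float w = width / float(states.length);
  for (int i = 0; i < states.length; i++) {
    fill(states[i] == 1 ? 255 : 30);
    rect(i * w, 0, w, height);
  }
}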

I am setting myself a deadline of the end of March to make a smallish prototype. There are two main factors that I need to test:

1) electronic set-up, making the sensors work

2) communicating between input and output in order to activate projection

This will then let me figure out the scale at which I can build the actual sculptural work, restrict the area in which the projection occurs, and co-ordinate this with the area that the sensors cover. I really hope all this works out!

————-