Portfolio Update.

WLS Crew
Today is Saturday and I decided it was time to update my portfolio. Since three sites I was a part of have launched since my last update, I figured I should take the time to put some new work in there. I worked with some awesome clients over the past year, and the projects turned out great thanks to the talented team at welikesmall and the agencies we partnered with. The photo for this post is the welikesmall team at Grinders in SLC eating some delicious cheese steaks. Check out some of the welikesmall team’s sites in the right sidebar.

Check out the projects here: portfolio.coreyhankey.com

I have been lazy

My previous post was very similar to this one. My good friend Phil called me on it, so I guess it’s time to actually write about discovery. Anyway, it’s been a long time and I have done a lot of interesting things since I last wrote here. So without further ado, I am going to attempt to write one entry per week about technology or something interesting I did or experienced. To spice things up, I have included some photos from my experiences over the past year.

Keeping It Simple

This blog has been relatively dormant, and I am well aware of it, so I am writing this quick post to motivate myself to spend more time writing about things other than “experimental” programming. It’s time to look at the real concept of randomizing discovery and show how discovery really can be random. I have picked up some tricks on interesting projects at work, so over the next few weeks I am going to spend some time preparing some ideas on those.

New Work

I have updated my portfolio with some new professional work that went live recently. I am stoked about the recent launches, so I figured I would write an entry. The screenshots are here.

The first project to launch was an old one that had been idle until now: a site for Pergo North America. Struck Creative did the entire site, and I spent the majority of my time building the project planner section, which lets users see what their room might look like with Pergo flooring installed. It’s the result of several months of production and about six weeks of development. Here is the link!

The other project was a portfolio site for Cliff Freeman and Partners in NYC. It’s a chat bot site; I was the lead developer and built both the front and back end. It was quite an undertaking, and the final product gave the client the ability to create unique ways for the user to navigate the site based on his or her conversation with the chat bot. The back end was written in Django, and Python ended up being the perfect solution since there are so many requests to the server. Overall it was an incredible learning experience that challenged me to develop strong optimization habits. The project can be viewed here.

Video Paint Debut


Video Paint .1 from Corey Hankey on Vimeo.

After a summer full of research and not being able to spend the time I wanted on this project, I have finally gotten the initial prototype where I wanted it. I am pretty happy with the initial result, and I am looking forward to adding different ways to interact with the video as I move forward. The process takes a grid of particles that get their color from the current frame of the video. These particles then react to the position of the mouse, which determines whether they are visible or not.
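The visibility rule described above can be distilled into a few lines. This is a plain-Java sketch of the idea, not the original Processing code; the threshold name and value are illustrative assumptions.

```java
// Sketch of the visibility rule: a particle is drawn only when the mouse
// is within a given radius of it. VISIBLE_RADIUS is an invented threshold.
public class ParticleVisibility {
    static final double VISIBLE_RADIUS = 50.0; // illustrative cutoff in pixels

    // true if the particle at (px, py) should be drawn for a mouse at (mx, my)
    static boolean isVisible(double px, double py, double mx, double my) {
        double dx = px - mx, dy = py - my;
        return Math.sqrt(dx * dx + dy * dy) < VISIBLE_RADIUS;
    }

    public static void main(String[] args) {
        System.out.println(isVisible(100, 100, 110, 110)); // near the mouse -> true
        System.out.println(isVisible(100, 100, 300, 300)); // far away -> false
    }
}
```

Running this over every particle in the grid each frame gives the reveal-on-hover effect.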

Video Processing.

After spending Labor Day weekend at the Bumbershoot festival in Seattle, I was inspired/enlightened by the visuals being displayed on the LED screen behind the performers. At times the visuals were interesting, and at other times they were pretty lame.

This series of events inspired me to look into how a video experience could be created that would be unique each time it was viewed. It would be really interesting if the video cut or transitioned based on a set of rules influenced by different factors. One factor could be sound. Another could be a change in color values in a video feed. Another could be an IR camera reading the crowd’s hand movements.
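The rule set could be as simple as a handful of thresholds, one per input, with a cut whenever any rule fires. This is a hypothetical sketch in plain Java; every name and threshold here is invented for illustration, not part of any existing system.

```java
// Hypothetical rule engine: each input (sound level, color change in the
// feed, crowd motion) is a normalized 0..1 value, and the video cuts when
// any rule fires. All thresholds are illustrative guesses.
public class CutRules {
    static boolean shouldCut(double soundLevel, double colorDelta, double motion) {
        boolean loudBeat   = soundLevel > 0.8; // e.g. a drum hit
        boolean colorShift = colorDelta > 0.5; // a big change in the video feed
        boolean crowdWave  = motion > 0.6;     // an IR camera sees hand movement
        return loudBeat || colorShift || crowdWave;
    }

    public static void main(String[] args) {
        System.out.println(shouldCut(0.9, 0.1, 0.1)); // loud moment -> true (cut)
        System.out.println(shouldCut(0.2, 0.1, 0.1)); // quiet, still -> false (hold)
    }
}
```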

It’s a concept I want to move forward with, but I figured it would be good to document the entire process, starting with my inspiration for the concept.

Stone Temple Pilots

Bumbershoot
Band of Horses.

Detecting Motion.

After taking a little time to research how other people were building motion detection algorithms, I decided to use the difference blend mode to detect what was changing from frame to frame. The first few tests I ran were very efficient, but the results were not very accurate. I created previous and current pixel arrays that stored the two frames’ data and then blended the two images together into a new pixel array.

Now that I had all the pixel data, I took a closer look at the convolution filter section in the Processing book. Convolution filters enhance certain features in an image based on a multi-dimensional kernel array that is used to manipulate the pixels. After a little work and some experimenting, I found a kernel that enhanced the motion changes in the image.

The problem now was that this process was very processor intensive and wouldn’t leave room for more features to be added to the program. I shrank the camera capture dimensions and ran the algorithm on the smaller pixel data, then scaled the final pixel data up to fit the screen as it was drawn. This change took the load on the processor down from 50% to 27% while keeping the quality of the motion detection.

Here are the Results (application zip): Motion Detection 01

[sourcecode language='java']
import processing.video.*;

Capture cam;
PImage prev;
PImage cur;
int count;

// convolution kernel (horizontal Sobel)
float[][] kernel = { { -1, -2, -1 },
                     {  0,  0,  0 },
                     {  1,  2,  1 } };

void setup() {
  size(640, 480);
  count = 0;
  // if no device is specified, the default camera is used
  cam = new Capture(this, 320, 240);
  frameRate(24);
  // create previous and current images
  prev = createImage(width/2, height/2, RGB);
  cur = createImage(width/2, height/2, RGB);
  prev.loadPixels();
  cur.loadPixels();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
    // capture every third frame into the previous image;
    // change this to get more drastic results
    if (count > 2) {
      arraycopy(cur.pixels, prev.pixels);
      count = 0;
    }
    count++;
    cam.loadPixels();
    arraycopy(cam.pixels, cur.pixels); // copy, don't alias, the camera's pixels
    prev.updatePixels();
    cur.updatePixels();

    // calculate the difference between the two frames
    cur.blend(prev, 0, 0, width/2, height/2, 0, 0, width/2, height/2, DIFFERENCE);
    cur.loadPixels();

    // create a new image to store the filtered pixel data
    PImage eImg = createImage(320, 240, RGB);

    // run convolution filter (adapted from the Processing book, pg. 360)
    for (int y = 1; y < 240 - 1; y++) {
      for (int x = 1; x < 320 - 1; x++) {
        float sum = 0;
        for (int ky = -1; ky <= 1; ky++) {
          for (int kx = -1; kx <= 1; kx++) {
            // accumulate each neighboring pixel weighted by the kernel
            int pos = (y + ky) * 320 + (x + kx);
            float val = red(cur.pixels[pos]);
            sum += kernel[ky + 1][kx + 1] * val;
          }
        }
        eImg.pixels[y * 320 + x] = color(sum);
      }
    }
    eImg.updatePixels();

    // scale image up to fill the screen
    scale(2, 2);

    // draw the manipulated data to the screen
    image(eImg, 0, 0);

    // threshold filter converts to black and white only
    filter(THRESHOLD, .95);
  }
}
[/sourcecode]

Brownian Movement Gets Moving.


Brownian Lines from Corey Hankey on Vimeo

It’s been a while, but I decided it was time to get back to it. After taking some time to read more of the Processing book, I discovered that Processing renders at a specified frame rate. Unlike Flash, which will skip frames to keep up with the processor, Processing renders can be saved out as an image sequence, similar to a 3D program like Maya or Cinema 4D.

This concept inspired me to create more complex animations using the saveFrame() method. I started off by collecting my colors from images in my photo library of things that I enjoy, like my wife or the great outdoors. I used a for loop to create 2000 lines on the stage when the program started, and then grabbed a brew while Processing rendered out the 300 frames that I specified.
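The save-every-frame idea can be approximated outside Processing too. This is a rough plain-Java analogue of saveFrame() using ImageIO; the file-name pattern and frame count here are illustrative, not from the original sketch.

```java
import javax.imageio.ImageIO;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;

// Plain-Java analogue of Processing's saveFrame(): draw each frame into an
// offscreen image, then write it out as a numbered PNG for later assembly
// into a video. Frame count and file names are illustrative.
public class FrameSaver {
    public static void main(String[] args) throws Exception {
        int frames = 3; // the original render used 300 frames
        for (int f = 0; f < frames; f++) {
            BufferedImage img = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = img.createGraphics();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, 200, 200); // placeholder for this frame's drawing
            g.dispose();
            // numbered like saveFrame("frame-####.png")
            ImageIO.write(img, "png", new File(String.format("frame-%04d.png", f)));
        }
    }
}
```

The numbered PNGs can then be imported as an image sequence into QuickTime or a compositing tool.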

I have included a few Vimeo videos that I posted, but they just don’t do the animation justice, so I posted a QuickTime of the blue render here: Brownian Quicktime.
Vimeo version

Screen shots:
Color from my wife’s portrait.
Brownian Lines Animation

Colors from a day of skiing:
Brownian Lines Get Moving

Adding Vector Math

Brownian Movement get vectors

After a weekend of skiing with my wife, I decided it was time to get back to it and add some controls to my previous experiments. After tweaking the last few sketches, I decided that I did not have enough control over the brownian particles being added to the stage. I took some time to look at how I could apply forces to the particles, which quickly led me to vector math.

Translating vector math from paper into a programming environment seems like a formidable challenge, but it was easier than I expected. With a little trig and some basic vector equations, I was able to create a pulling force on each particle that always pulls it back toward its initial position. I have posted a simple example that shows the different vectors in action, with source, here: Brownian Motion w/ Vectors
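The pulling force described above boils down to one vector subtraction and a scale. This is a minimal plain-Java sketch of the idea; the constant name PULL and its value are assumptions, not taken from the posted source.

```java
// Minimal sketch of the pulling force: a vector from the particle's current
// position back toward its initial ("home") position, scaled by a small
// spring-like constant. PULL is an illustrative value.
public class PullForce {
    static final double PULL = 0.05; // illustrative spring constant

    // returns {fx, fy}: the force pulling (x, y) back toward (homeX, homeY)
    static double[] pullToward(double x, double y, double homeX, double homeY) {
        return new double[] { (homeX - x) * PULL, (homeY - y) * PULL };
    }

    public static void main(String[] args) {
        // a particle displaced right and up of home gets pulled back left and down
        double[] f = pullToward(120, 80, 100, 100);
        System.out.println(f[0] + ", " + f[1]);
    }
}
```

Adding this force to each particle's random brownian step every frame keeps the lines from wandering off while preserving their jitter.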
Unfortunately, the majority of my time was spent figuring out how vectors work instead of creating new visuals, but you will notice in the images I have posted that there is more control over the lines drawn to the screen. Hopefully I will get to the visuals in my next round of experiments. Here are a few samples:

Basic Blue lines with a randomSeed
Brownian Movement get vectors

Brownian Curves

Brownian Movement Lines
The randomSeed method allows for patterns to be formed from the random numbers.
Brownian Lines.
Brownian Movement Lines
I achieved some pretty cool results by using brownian particles to create curves. I found this example interesting because I was able to introduce the randomSeed method into the equation and began to see consistent patterns from the randomly generated numbers. This is a huge discovery for me because it provides a simple way to get consistent results from the visuals the particles are creating.
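Why seeding works can be shown in a few lines: the same seed produces the same sequence of "random" steps, so the same lines get drawn every run. This sketch uses plain Java's java.util.Random as a stand-in for Processing's randomSeed(); the walk parameters are illustrative.

```java
import java.util.Arrays;
import java.util.Random;

// Demonstrates why seeding gives repeatable patterns: the same seed yields
// the identical sequence of random steps, hence the identical path.
public class SeededBrownian {
    // a 1D brownian walk: each step is random(-5, 5), like the sketch below
    static double[] walk(long seed, int steps) {
        Random rng = new Random(seed); // plain-Java stand-in for randomSeed()
        double[] xs = new double[steps];
        double x = 0;
        for (int i = 0; i < steps; i++) {
            x += rng.nextDouble() * 10 - 5;
            xs[i] = x;
        }
        return xs;
    }

    public static void main(String[] args) {
        // two runs with the same seed trace the identical path
        System.out.println(Arrays.equals(walk(42, 100), walk(42, 100))); // true
    }
}
```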
As I mentioned in a previous post, I would provide some source for the ideas behind brownian movement. Here is a super basic example that shows how simple the concept is:

// basic brownian movement

float xPos;
float yPos;

void setup()
{
  size(200, 200);
  background(255);
  fill(100);

  yPos = height/2;
  xPos = width/2;
}

void draw()
{
  // randomly change the position on every cycle
  yPos += random(-5, 5);
  xPos += random(-5, 5);
  ellipse(xPos, yPos, 10, 10);
}