Thursday, June 27, 2013

What a day

Rainy walk over to the lab only to rediscover that there's an all-student meeting this afternoon about planning our graduate degree show and it's over in one of the other buildings. Ugh. Not much time to work over here today so I'm going to block out some of the backgrounds and environmental stuff. Here are some shots from yesterday. I was trying out a pose that the rabbit will end up in pretty often to make sure I had the skin weights alright, especially that his underside didn't fold into itself.

Wednesday, June 26, 2013

Day off & UVs

I took a little break yesterday and didn't come in to the lab. My eyes and brain were fried from solving Monday's problems and I just couldn't bring myself to sit down and deal with unwrapping UVs. Instead, I stayed at home, read some of the research materials for the written component of this project, and saw a movie. Let's call it researching camera moves and angles...

Anyway, this morning I unwrapped and messed around with the rabbit UVs. I think they're all right; I'll throw some color on it this weekend. The checkerboard doesn't seem too stretched out. We'll see.
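For future reference, the checkerboard eyeball test can be backed up with an actual number: compare each triangle's area in 3D to its area in UV space. If the ratio is roughly the same everywhere, stretch is low; wild variation means the checkers will smear. Here's a rough back-of-the-envelope sketch in plain Python (made-up triangle data, not the Maya API):

```python
import math

def tri_area_3d(a, b, c):
    # area of a 3D triangle via half the cross-product magnitude
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def tri_area_uv(a, b, c):
    # unsigned area of the same triangle in 2D UV space
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def stretch_ratios(tris):
    # tris: list of ((p0, p1, p2), (uv0, uv1, uv2)) pairs;
    # a big spread in these ratios = visible checkerboard stretch
    return [tri_area_3d(*pts) / tri_area_uv(*uvs) for pts, uvs in tris]
```

Not something I'd run on the whole rabbit by hand, but it's the idea behind what the checker pattern shows visually.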

Did some fine-tuning with the skin weights. Getting there...
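One thing the fine-tuning drove home: each vertex's weights across all its influences have to sum to 1, so painting more weight onto one joint steals it from the others. The normalize step boils down to something like this little sketch (joint names are invented, not the actual rig):

```python
def normalize_weights(weights, eps=1e-9):
    # weights: {joint_name: value} for a single vertex.
    # Scale so they sum to 1, which is what the normalize-weights
    # option enforces after a painting pass.
    total = sum(weights.values())
    if total < eps:
        raise ValueError("all-zero weights on vertex")
    return {joint: w / total for joint, w in weights.items()}
```

So a vertex painted 0.5 on the spine and 1.5 on the hip ends up 25/75 after normalization, which explains why pushing weight around one joint ripples into its neighbors.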

Painting weights:

Testing some checkerboard:

UVs:

Monday, June 24, 2013

spline conundrum -> binding skin

We had some technical difficulties this morning with the spline spine that started a little domino-effect down the hierarchy of controllers and manipulators. I didn't get around to actually binding the skin and starting to paint weights until this afternoon. Here are a few fun poses. Shaping up well!

Bunched up:


Stretched out:


Run away!

Friday, June 21, 2013

rig-a-rig-a-rabbit

To the tune of Ring Around the Rosie:

Rig a random rabbit;
make naming joints a habit;
if you didn't orient,
they all fall down!



(alternate ending: if you didn't orient... well, you're f*#%d!)





Legs are fine. Arms are fine (fingers to be dealt with later, if need be). Tail is fine. Neck is... hopefully fine. Spine is ANNOYING. He's too bendy for his own good. I'm just not super comfortable with spline IK, but it should be alright as long as I deal with the clusters correctly...
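The reason the clusters matter so much: the spline IK joints just follow a curve, and each cluster drags control points of that curve around. The underlying curve math is nothing scarier than repeated lerping. A rough sketch of evaluating one cubic segment from four control points (pure Python, not whatever Maya actually calls internally):

```python
def de_casteljau(cvs, t):
    # Evaluate a cubic Bezier segment at parameter t in [0, 1]
    # by repeatedly lerping between neighboring control points.
    # Dragging one "cluster" (control point) reshapes the whole
    # segment, which is roughly what spline-IK clusters do to a spine.
    pts = [list(p) for p in cvs]
    while len(pts) > 1:
        pts = [[(1 - t) * a[i] + t * b[i] for i in range(len(a))]
               for a, b in zip(pts, pts[1:])]
    return pts[0]
```

Four evenly spaced collinear CVs give back a straight spine, and nudging the middle two is what makes him arch, so keeping the clusters tidy is keeping the CVs tidy.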

I think I'm just going to give the ears a few controllers. Don't want to mess with more splines and clusters, so it'll just be rotating the joints. Eh, should be fine. I wasn't totally sure what to do with the neck, since it's nearly as flexible as the back and needs to be able to puff out when crouched down/bunched up, but also stretch forward and out when running quickly. I added an RP-solver IK, as you'd ordinarily use in any old character, from the neck to the head joint, but I left one more joint between those two so it's more flexible and has a little reactive bend in it. It's all under the head controller circle. Seems OK so far. We'll see...
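For the record, what the RP solver does on a simple two-joint chain is basically the law of cosines: given the two bone lengths and a target, the bend angle falls out directly. A hedged planar sketch, ignoring the pole vector and the extra in-between joint I added (angle conventions here are my own, not Maya's):

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    # Planar two-bone IK: return (root_angle, bend_angle) in radians
    # so a chain of bone lengths l1, l2 reaches target (tx, ty).
    d = math.hypot(tx, ty)
    # clamp: an out-of-reach target just straightens the chain
    d = max(min(d, l1 + l2), 1e-9)
    # law of cosines for the middle joint's bend
    cos_bend = (l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2)
    bend = math.pi - math.acos(max(-1.0, min(1.0, cos_bend)))
    # aim at the target, then back off by the first bone's interior angle
    cos_root = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    root = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_root)))
    return root, bend
```

With two unit bones reaching for (1, 1), the root stays flat and the middle joint bends 90 degrees, which matches what the elbow-style pop looks like in the viewport.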

Thursday, June 20, 2013

lagomorphic cleft palate surgery x 3

This morning was rough. I modelled the mouth and ears and tweaked the head a little bit overall. It's a little narrower. Domestic rabbits have wider heads, but desert cottontails are a bit "sportier", though not nearly as lanky as hares. When it came to the mouth I kept adding detail to one side with the knowledge that I would mirror the geometry as a final step. This went fantastically, until I realized that the points on the inside and underside of the lip were so close together that they puckered into one star vertex with a million edges. I ended up doing rabbit "cleft palate surgery" around three times. Finally, I just decided to forget about it, continue modelling, and move on to the ears. With the ears finished, I came back at the end to mirror it all one last time. In the interim I kind of forgot what the inner lip/mouth/nose geometry had originally looked like, so he looks a little different now than intended. It's like rhinoplasty that came out a little too subtle. Something has changed, I just don't know what. Oh well. Here's blockhead:


Here's a smoother version from the front:


And the whole blocky body. Still need to attach the neck. Ho-hum.

EDIT: Connected head to body. There's a seam just at the base of the skull that REALLY bugs me but once I texture him it probably won't show up. I think I've just stared at it for too long. I thought the wireframe from the side looked pretty:

Wednesday, June 19, 2013

Mouth Update

I'm trying to work out the mechanics of rabbits' lips when they open and close their mouths. After looking up pictures, I can assuredly say that rabbit mouths are TERRIFYING! They seem very cute and cuddly when closed, but when opened wide the lips SPLIT right up to the nose! It looks like an unmasked Predator! Let's compare:

Cute, unassuming desert cottontail:


Then suddenly.... ROAR!

Predator for comparison:

The lip split under the nose:


"Oh my...."

We Are Already Cyborgs -Jason Silva

CLICK THAT LINK!! This is exactly my cup of tea. Jason Silva strikes again! Follow him on Twitter. Your brain will be very happy.

I have a bunch to say on this topic since it is close to the core of my written piece. I'll let this simmer in my head on the long walk home.

The mental image when one hears "cyborg" or "augmented reality" is one of robots and high science fiction. Silva does a great job of quickly and simply explaining that technology is an extension of ourselves, and that it can be as basic as tool use. Anything we use to supplement the function of our own body is a technological advantage, be it a paintbrush or a haptic glove. The reality is that we are already cyborgs in a sense, just not the kind you might think.

Note to self: look up tools for conviviality. Simplest tools to enhance abilities -> tools for futuring.

Digital Digits & Blockhead the Bunny

First thing I did when I came in this morning was copy the whole body, delete the paws, and completely remodel them with actual fingers. MUCH better, I think. The back feet stayed the way they were as one solid piece. I'm ok with that.

He kind of looks cool, actually, down at the unsmoothed poly level. I like his new little blocky toes.

Blocked out (hah) his head. I'm not super confident in modelling eyes and eyelids, but after wrestling with it for a little over an hour I think it's passable. I was super careful to avoid vertices with more than 4 edges. No weird pointy or puckered bits allowed! I want to sketch up some mouth ideas and specific shapes/poses before I model the nose/mouth muzzle area. All that's left, then, are some big ears and connecting the head and body.
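Speaking of puckered bits: a pole is easy to find programmatically, since it's just a vertex with more distinct edges than its neighbors. On a clean quad grid the interior valence is 4; much higher and it smooths badly. A toy check over a face list (indices made up, no actual mesh API here):

```python
from collections import defaultdict

def vertex_valence(faces):
    # faces: lists of vertex indices, e.g. [0, 1, 4, 3] for a quad.
    # Count the distinct edges meeting each vertex; high-valence
    # "star vertices" are the ones that pucker under smoothing.
    neighbors = defaultdict(set)
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            neighbors[a].add(b)
            neighbors[b].add(a)
    return {v: len(nbrs) for v, nbrs in neighbors.items()}
```

On a little 2x2 grid of quads the center vertex comes out at valence 4 and the corners at 2, exactly the pattern you want everywhere on the head.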

Tuesday, June 18, 2013

Lucky Rabbit Feet

Spent yesterday in the lab starting to model the rabbit. In my head I saw him a little more stylized and angular, but these screengrabs show the smoothed version. We'll see how it progresses. He's still currently headless, but I'm really happy with the pelvis-to-leg geometry. Smooth, simple, and should deform really nicely. The front legs were a little messier, I'll probably tweak them a little more. It was strange modeling the rabbit because his range of motion is so extreme. He's very bouncy and can bunch up into a very compact ball, or stretch out quite long while running. The back needs to be very flexible and the back legs need to be able to fold up almost completely without weird geometry folding in the skin, but also stretch out behind him without being whittled away into skinny nothingness.



I love the back feet, but the front paws give me pause (haha). Part of me wants to remodel the front paws with actual little fingers. That might make for a nice detail in a few shots. I wouldn't animate each separately, but probably add a slider between open/splayed and closed/together. Wouldn't be too difficult...
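The open/closed slider idea is really just a linear blend between two stored poses, with one scalar driving every finger joint at once. In sketch form (joint names and rotation values are invented for illustration):

```python
def blend_pose(closed, open_, slider):
    # Lerp every finger-joint rotation between a "closed" pose and an
    # "open" pose; slider clamped to [0, 1], the way a single custom
    # attribute could drive all the fingers together.
    t = max(0.0, min(1.0, slider))
    return {joint: (1 - t) * closed[joint] + t * open_[joint]
            for joint in closed}
```

Halfway along the slider gives half-curled fingers, which is about all the finger animation these shots would ever need.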

I boarded a couple sequences over the weekend, too. Worked out a few things for the snake character regarding what is possible/impossible for it to do in the weird surreality of the tunnels.

Friday, June 14, 2013

bouncing along

I don't have access to a scanner when I'm not at the lab or the library, but I held some sketches up to my laptop's built-in camera. These are from earlier this week when I was looking into which kind of rabbit or hare to use as reference. Settled on the desert cottontail. They don't actually live in warrens or holes as deep as the one in my story, but let's just imagine the snake warps reality a little bit or makes the ground collapse into a larger system of tunnels.



The legs on the bottom two are a little longer than I'd like. I just wanted to explore the bone structure a little bit. I also found a good link to a locomotion study to help me figure out how to make him move: http://seanlowse.blogspot.co.uk/2012/02/rabbit-locomotion-study.html

Thursday, June 13, 2013

Rough Gameplan

This is an extremely loose and basic timeline of things to do:

JUNE
Week 1: Make sure all books are procured. Animation ideation.
Week 2: Finalize animation script. Board or write beats for sequences.
Week 3: Model assets. Revise boarded/beat sequences.
Week 4: All character models done by midweek.

JULY
Week 1: ANIMATE. Bullet outline of written research.
Week 2: ANIMATE and write
Week 3: ANIMATE and write
Week 4: ANIMATE and write. Collect ambient sounds from library/database.


AUGUST
Week 1: Begin rendering. Acquire sounds for rendered scenes as completed. Write.
Week 2: All scenes rendered by midweek. Write.
Week 3: Composite. Final Mix. Edit written work.
Week 4: TURN IN

Hamlet on the Holodeck

Last week I pulled a TON of books from the library. I'm slowly blocking out all light coming from my window with stacks and stacks of books. Most of them are only tangentially related, so I probably won't end up reading them cover-to-cover. In my wanderings through the library I kept getting drawn toward books with a more philosophical leaning. While I'm absolutely fascinated by the brain chemistry behind our perception of reality and how we interact with technology, that would open up a whole new can of worms. In the approximate words of Bones McCoy, "I'm an animator, Jim, not a neuroscientist/philosopher." Same goes for robots. I love all debates regarding the singularity, but I don't plan on doing any heavy programming and building myself a little friend. I will, however, touch on the singularity as a potential extreme future prospect regarding what we define as augmented reality.

Anyway, back to the books. There are a few gems that I think compare/contrast nicely. Their strange synergy kind of shows the direction that my research is starting to take. They are Hamlet on the Holodeck: The Future of Narrative in Cyberspace by Janet H. Murray and Hand's End: Technology and the Limits of Nature by David Rothenberg. I'm in the middle of H.o.t.H. right now and loving it. I thought it would be a good place to launch into my project not only because of the content but because prior to becoming a programmer the author had a background in English and the humanities. This multidisciplinary approach is very relevant not only to my research, but to nearly all technology today since it is so broadly prevalent in almost every arena of modern life.

What is this?

Hi there! This is a rough journal of my research and production notes while working on my Master of Design in Animation at the Glasgow School of Art. For my final project I have to conduct independent research and present a written piece on a topic of my choice and also create a related piece of animation.

This is the original research proposal from last month. It has morphed a considerable bit since then, but I think it's good to know where it started:

= = = = = = = = = = = = = = = = = = = = = = =

17 May 2013

Research Proposal
Mediating Reality: Seeing the Unseen Augmentations

When faced with the term “augmented reality”, many might envision holographic displays hovering mid-air as a science-fiction hero interacts with them directly using hand motions. Others might point out the yellow line indicating a “first down” on an American football broadcast, or the flags following Olympic swimmers in their lanes indicating their current standing and time elapsed. It is commonly understood as experiencing a physical, real-world environment possessing elements that are supplemented by computer-generated input. It does not necessarily seek to duplicate the world’s environment in a computer or create an entirely separate iteration, as virtual reality does. We live in a digital age where interacting with hardware and seeing results appear on a screen are so commonplace that they are nigh impossible to avoid in day-to-day life.

Because of the way we now inherently accept the digital, where does this place the boundaries for what is augmented from our daily experienced world? Are we even aware of this acceptance? Car commercials feature what seem to be actual vehicles moving through country roads or cities at night, but more often than not they are computer-generated imagery meant to enhance our perception of the actual car. Have we been deceived, or have we merely accepted that what we are shown must be real and created for ourselves a “real” perceived world? The same concept can be applied to cinema. When we watch a movie, we are willfully suspending disbelief and entering the world that is shown to us on the screen. Advances in the technology of filmmaking have greatly increased the scope of worlds that we are able to enter. What started with manually compositing images with mattes then became green screen and chroma-keying. Really, though, is there so much difference between Who Framed Roger Rabbit? and Avatar, insofar as each takes our interaction and augments it not once, through the mere watching of a motion picture, but twice, through creating another layer, a world within that cinematic world, that we are asked to believe is whole and experience as such? So much today is processed with digital cameras that the claim to create a commercial film with “no CGI” is laughable. I love the tangibility and character of traditional film as much as the next cineaste, but today's industrialized approach to filmmaking heavily tends toward the computerized, though I will not call it automated. There is definitely still an art to digital fabrication, and visual effects artists create explosions, the aforementioned cars, and fantastical creatures that we are asked to accept as physical and real, and, more often than not, we do.
When a character exists across a physical puppet, a digital double, a voice actor, and the hands of countless animators, though, in what space does the character truly exist and become, as we experience her, real? Most effects are invisible to us because they are designed to be; they are not meant to distract from the world we are shown, but to augment it.

I posit that the scope of what we describe as augmented reality is, in actuality, far broader than the common description. In a way, it is much more basic, and strangely enough the increased technological literacy of our world has led me to realize that even rudimentary tools have been augmenting our reality for a very long time. I want to examine the blurry intersection between what we will accept as reality and those experiences that we hold at arm's length and explain away as bits of code flying through an imagined space. At what point does a scene created in Maya stop being a simulation of 3D space and start being an actual environment that, due to our physical fleshy limitations, we must interact with through a computer? Haptics let us interact with digitally created objects, but at what point does the suggestion of an object become “real”?

This research will be largely textual. I will create an accompanying animated short, but it will complement the written research rather than be a direct application of the research’s outcome, as that would require technology and skills to which I just do not have access. (Hopefully, though, my research could lead to eventual later application of principles learned.) The animated piece will allude to the boundary between what is perceived and experienced and what is real, though that is really all dependent upon the viewer. I would like to combine numerous media for animation, like projecting 2D image sequences onto 3D surfaces in Maya, as it adds further dimensions of augmentation and composite pieces of reality that we as the audience must synthesize in our brains into the experienced world.

As far as the text goes, the main question I want to answer is this: because of the way we now accept the digital, is all digital cinematic interaction, then, augmented reality? Or has the digital merely become an extension of ourselves in a sort of transhumanist ‘cyborg’ kind of way? In a twist of etymological irony, we call something “digital” because it is numerically based coded information, and numbers are “digits” because we count them on our own digits, our fingers. So really, we’re just coming full circle and re-accepting our artifice back into ourselves. Additionally, what is a brain but a switch that turns muscles on or off in rapid succession to produce movement, electrical impulses, and other chemical responses? So, then, we are now re-learning how to interface with computers, our out-of-body brains, which turn binary switches on or off telling a computer what to execute next. I will supplement my more artistic take on the cinematic world of augmented reality by looking at the work of Steve Mann, applications of Natural User Interfaces, and work published at the MIT Media Lab in Fluid Interfaces and the Center for Future Storytelling. With luck, I will also be permitted to shadow the research being conducted upstairs at the DDS.