Wednesday, July 3, 2013
1. I NEED TO DO THIS: http://www.youtube.com/watch?v=XrmDftTd434
The cascade that drops from cyan to red and settles at the bottom is PERFECT for when reality breaks inside the hole/cave/tunnel. Would love to have some sort of "crash" effect, like a computer just gave up (a rough sketch of what I mean is below this list). So excited!! I should play with Python more often than I have (which is, like, once...)
2. Animating a run cycle today. So far it's working, and I'll pop out a playblast before I leave this afternoon. Hoping to use Clips and the Clip Editor, since running away is definitely an action that will be repeated in this project and animating it again in each sequence could get tedious (there's a quick sketch of the clip setup below the list, too).
3. Tomorrow is the Fourth of July! Being an American in Scotland, I will make my own holiday, not come into the lab, have corn on the cob and burgers and strawberries and apple pie and sit outside in the park reading my research stuff. YES.
4. Ok, so not actually a lot of things. Eh.
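For item 1, something like this is what I'm picturing for the crash cascade. A rough Pygame sketch, assuming Pygame is installed; the window size, spawn rate, fall speed, and the cyan-to-red ramp are all placeholder guesses, not values taken from the video:

```python
# Rough sketch of a "crash" cascade: pixels rain down, shift from cyan to red
# as they fall, and pile up at the bottom as if the image gave up.
import random
import pygame

WIDTH, HEIGHT = 320, 240
CYAN, RED = (0, 255, 255), (255, 0, 0)

def lerp(a, b, t):
    """Linear interpolation between two RGB colours."""
    return tuple(int(a[i] + (b[i] - a[i]) * t) for i in range(3))

def main():
    pygame.init()
    screen = pygame.display.set_mode((WIDTH, HEIGHT))
    clock = pygame.time.Clock()
    stack_height = [0] * WIDTH   # how many pixels have settled in each column
    falling = []                 # falling pixels as [x, y] pairs

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        # spawn a few new pixels at the top each frame
        for _ in range(20):
            falling.append([random.randrange(WIDTH), 0])

        # advance falling pixels; settle them when they hit the pile
        still_falling = []
        for p in falling:
            p[1] += 4
            if p[1] >= HEIGHT - stack_height[p[0]]:
                stack_height[p[0]] = min(HEIGHT, stack_height[p[0]] + 1)
            else:
                still_falling.append(p)
        falling = still_falling

        # draw: falling pixels fade cyan -> red with depth, the pile is solid red
        screen.fill((0, 0, 0))
        for x, y in falling:
            screen.set_at((x, y), lerp(CYAN, RED, y / HEIGHT))
        for x, h in enumerate(stack_height):
            if h:
                pygame.draw.line(screen, RED, (x, HEIGHT - h), (x, HEIGHT - 1))
        pygame.display.flip()
        clock.tick(30)

    pygame.quit()

if __name__ == "__main__":
    main()
```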
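And for item 2, a rough maya.cmds sketch (run inside Maya) of turning a keyed run cycle into a reusable Trax clip. The frame range and names are hypothetical, just to show the idea:

```python
import maya.cmds as cmds

# Select the rig's animated controls in the viewport first, then run this.
ctrls = cmds.ls(selection=True)

# Group the keyed controls into a character set so their curves can be clipped.
run_char = cmds.character(ctrls, name='rabbitCharacter')

# Bake the keys between frames 1 and 24 (a guess at the cycle length) into a
# named clip. The clip can then be dropped, offset, and cycled in the Trax
# (Clip) Editor instead of re-keying the run in every sequence.
run_clip = cmds.clip(run_char, startTime=1, endTime=24, name='runCycle')
print('Created clip:', run_clip)
```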
Wednesday, June 19, 2013
Mouth Update
I'm trying to work out the mechanics of rabbits' lips when they open and close their mouths. After looking up pictures, I can assuredly say that rabbit mouths are TERRIFYING! They seem very cute and cuddly when closed, but when opened wide the lips SPLIT right up to the nose! It looks like an unmasked Predator! Let's compare:
Cute, unassuming desert cottontail:
Then suddenly.... ROAR!
Predator for comparison:
The lip split under the nose:
"Oh my...."
We Are Already Cyborgs -Jason Silva
CLICK THAT LINK!! This is exactly my cup of tea. Jason Silva strikes again! Follow him on Twitter. Your brain will be very happy.
I have a bunch to say on this topic since it is close to the core of my written piece. I'll let this simmer in my head on the long walk home.
The mental image when one hears "cyborg" or "augmented reality" is one of robots and high science fiction. Silva does a great job of quickly and simply explaining that technology is an extension of ourselves and technology can be as basic as tool use. Anything we use to supplement the function of our own body is a technological advantage, be it a paintbrush or a haptic glove. The reality is that we are already cyborgs in a sense, just not the ones you might think.
Note to self: look up tools for conviviality. Simplest tools to enhance abilities -> tools for futuring.
Thursday, June 13, 2013
Rough Gameplan
This is an extremely loose and basic timeline of things to do:
JUNE
Week 1: Make sure all books are procured. Animation ideation.
Week 2: Finalize animation script. Board or write beats for sequences.
Week 3: Model assets. Revise boarded/beat sequences.
Week 4: All character models done by midweek.
JULY
Week 1: ANIMATE. Bullet outline of written research.
Week 2: ANIMATE and write.
Week 3: ANIMATE and write.
Week 4: ANIMATE and write. Collect ambient sounds from library/database.
AUGUST
Week 1: Begin rendering. Acquire sounds for rendered scenes as completed. Write.
Week 2: All scenes rendered by midweek. Write.
Week 3: Composite. Final Mix. Edit written work.
Week 4: TURN IN
Hamlet on the Holodeck
Last week I pulled a TON of books from the library. I'm slowly blocking out all light coming from my window with stacks and stacks of books. Most of them are only tangentially related, so I probably won't end up reading them cover-to-cover. In my wanderings through the library I kept getting drawn toward books with a more philosophical leaning. While I'm absolutely fascinated by the brain chemistry behind our perception of reality and how we interact with technology, that would open up a whole new can of worms. In the approximate words of Bones McCoy, "I'm an animator, Jim, not a neuroscientist/philosopher." Same goes for robots. I love all debates regarding the singularity, but I don't plan on doing any heavy programming and building myself a little friend. I will, however, touch on the singularity as a potential extreme future prospect regarding what we define as augmented reality.
Anyway, back to the books. There are a few gems that I think compare/contrast nicely. Their strange synergy kind of shows the direction that my research is starting to take. They are Hamlet on the Holodeck: The Future of Narrative in Cyberspace by Janet H. Murray and Hand's End: Technology and the Limits of Nature by David Rothenberg. I'm in the middle of H.o.t.H. right now and loving it. I thought it would be a good place to launch into my project not only because of the content but because, prior to becoming a programmer, the author had a background in English and the humanities. This multidisciplinary approach is very relevant not only to my research but to nearly all technology today, since technology is so broadly prevalent in almost every arena of modern life.
What is this?
Hi there! This is a rough journal of my research and production notes while working on my Master of Design in Animation at the Glasgow School of Art.
For my final project I have to conduct independent research and present a written piece on a topic of my choice and also create a related piece of animation.
This is the original research proposal from last month. It has morphed considerably since then, but I think it's good to know where it started:
= = = = = = = = = = = = = = = = = = = = = = =
17 May 2013
Research Proposal
Mediating Reality: Seeing the Unseen Augmentations
When faced with the term “augmented reality,” many might envision a holographic display hovering mid-air as a science fiction hero interacts with it directly using hand motions. Others might point out the yellow line indicating a “first down” on an American football broadcast or flags following Olympic swimmers in their lanes indicating their current standing and time elapsed. It is commonly understood as experiencing a physical, real-world environment possessing elements that are supplemented by computer-generated input. It does not necessarily seek to duplicate the world’s environment in a computer or create an entirely separate iteration as does virtual reality. We live in a digital age where interacting with hardware and seeing results appear on a screen are so commonplace that it is nigh impossible to avoid in day-to-day life.
Because of the way we now inherently accept the digital, where does this place the boundaries for what is augmented from our daily experienced world? Are we even aware of this acceptance? Car commercials feature what seem to be actual vehicles moving through country roads or cities at night, but really they are more often than not computer-generated imagery meant to enhance our perception of the actual car. Have we been deceived, or have we merely accepted that what we are shown must be real and created for ourselves a “real” perceived world? The same concept can be applied to cinema. When we watch a movie, we are willfully suspending disbelief and entering the world that is shown to us on the screen. Advances in the technology of filmmaking have greatly increased the scope of worlds that we are able to enter. What started with manually compositing images with mattes then became green screen and chroma-keying. Really, though, is there so much difference between Who Framed Roger Rabbit? and Avatar, insofar as each takes our interaction and augments it not once, through the mere watching of a motion picture, but twice, through creating another layer, a world within that cinematic world, that we are asked to believe is whole and to experience as such? So much today is processed with digital cameras that the claim to create a commercial film with “no CGI” is laughable. I love the tangibility and character of traditional film as much as the next cineaste, but today's industrialized approach to filmmaking heavily tends toward the computerized, though I will not call it automated. There is definitely still an art to digital fabrication, and visual effects artists create explosions, the aforementioned cars, and fantastical creatures that we are asked to accept as physical and real, and, more often than not, we do. When a character is spread across a physical puppet, a digital double, a voice actor, and countless animators, though, in what space does the character truly exist and become, as we experience her, real? Most effects are invisible to us since they are designed to be so; they are not meant to distract from the world we are shown, but to augment it.
I posit that the scope of what we describe as augmented reality is, in actuality, far broader than the common description. In a way, it is much more basic, and strangely enough the increased technological literacy of our world has led me to realize that even rudimentary tools have augmented our reality for a very long time. I want to examine the blurry intersection between what we will accept as reality and those experiences that we hold at arm’s length and explain away as bits of code flying through an imagined space. At what point does a scene created in Maya stop being a simulation of 3D space and start being an actual environment that, due to our physical fleshy limitations, we must interact with through a computer? Haptics let us interact with digitally created objects, but at what point does the suggestion of an object become “real”?
This research will be largely textual. I will create an accompanying animated short, but it will complement the written research rather than be a direct application of the research’s outcome, as that would require technology and skills to which I just do not have access. (Hopefully, though, my research could lead to eventual later application of principles learned.) The animated piece will allude to the boundary between what is perceived and experienced and what is real, though that is really all dependent upon the viewer. I would like to combine numerous media for animation, like projecting 2D image sequences onto 3D surfaces in Maya, as it adds further dimensions of augmentation and composite pieces of reality that we as the audience must synthesize in our brains into the experienced world.
As far as text goes, the main question I want to answer is this: given the way we now accept the digital, is all digital cinematic interaction, then, augmented reality? Or has the digital merely become an extension of ourselves in a sort of transhumanist ‘cyborg’ kind of way? In a twist of etymological irony, we call something “digital” because it is numerically based coded information, and numbers are “digits” because we count them on our own digits, our fingers. So really, we’re just coming full circle and re-accepting our artifice back into ourselves. Additionally, what is a brain but a switch that turns muscles on or off in rapid succession to produce movement, electrical impulses, and other chemical responses? So, then, we are now re-learning how to interface with computers, our out-of-body brains, which turn on or off binary switches telling a computer what to execute next. I will supplement my more artistic take on the cinematic world of augmented reality by looking at the work of Steve Mann, applications of Natural User Interface, and work published at the MIT Media Lab in Fluid Interfaces and the Center for Future Storytelling. With luck, I will also be permitted to shadow the research being conducted upstairs at the DDS.
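As a quick aside on the “projecting 2D image sequences onto 3D surfaces in Maya” idea in the proposal above, here is a minimal maya.cmds sketch of one way it could be wired up. The object name, camera, and file path are placeholders, not assets from this project:

```python
import maya.cmds as cmds

obj = 'pCube1'   # hypothetical surface that receives the projection
cam = 'persp'    # camera the sequence is projected from

# File texture reading an image sequence (frame extension driven by time).
file_node = cmds.shadingNode('file', asTexture=True, name='seq_file')
cmds.setAttr(file_node + '.fileTextureName', '/path/to/seq.0001.png', type='string')
cmds.setAttr(file_node + '.useFrameExtension', 1)

# Perspective projection node fed by the file texture and linked to the camera.
proj = cmds.shadingNode('projection', asUtility=True, name='seq_projection')
cmds.connectAttr(file_node + '.outColor', proj + '.image')
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective projection
cmds.connectAttr(cam + '.message', proj + '.linkedCamera')

# Simple shader driven by the projection, assigned to the surface.
shader = cmds.shadingNode('lambert', asShader=True, name='seq_shader')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=shader + 'SG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
cmds.connectAttr(proj + '.outColor', shader + '.color')
cmds.sets(obj, edit=True, forceElement=sg)
```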