Friday, January 29, 2016
Slightly More Butchery
I added a new sign, gave the goose a couple curious glances, and added an awning.
Lacan
It seems like everyone is all over the topic of the "verbal nature of subjectivity", but I have not seen anyone really grab the mixed media implications, or my Transformative Performative Function™...
I am still trying to wrap my head around Lacan's work. I don't really get most of it yet, and a lot of his writing is Freudian psychoanalytical text, but he arrives at some relevant concepts about communication and interpretation after some conjecture about the cognitive dissonance babies feel when they look in the mirror and realize they are not one with their mother's breasts.
fantasmatization - a primal architecture of representations, drives and passions in and through which the subject creates a world for itself.
Performative Function Through Swapping Dialects
I guess I didn't really have a form unique to mixed media in the "Of Geese and Finance" shot, but I think I have one in the "Butcher Shop Interior" and "Panhandler" shots.
I am not sure what kind of function the "Butcher Shop Interior" is. It sort of changes from implicit to explicit as the text transforms into characters. All of my crossover tropes sort of go from constative to performative by virtue of swapping dialects between literature and cinematography through interactivity. In a sense they were never really performative functions, because they were never purely constative in the other platform. But that leads back to the problem that you cannot make a statement to another without eliciting some sort of reaction (or it is not really communication). In that light, the crossover performative functions are more perfect than a performative function in a single platform because they do not exist as statements at all in the other medium... But that might make them less than utterances.
I don't think a constative utterance can exist, but performatives are a gradation from implicit to explicit, and time-based media can go from not existing -> to implicit -> to explicit as the message is revealed to the audience.
Thursday, January 28, 2016
Chomskybot
http://rubberducky.org/cgi-bin/chomsky.pl
Possibly brilliant... Possibly terrifying, because I am starting to sound like this.
Time Based Performative Function or The Transformative Performative Function©
It seems a constative could be a transforming performative with the mood primitive in any time-based format...
A dramatic pause can be a performative function that happens over time, if the perception of the statement changes from informative to performative while the audience is watching, eliciting a psychological reaction.
For example:
If you start the statement masquerading as an informative*, but add a mood primitive that is not initially apparent, the temporal offset adds punctuation.
Say you wanted an audience to identify with a protagonist. You could create a scene where the antagonist smiles and says "there is the door" in the tone of an informative statement, then holds eye contact too long while the smile fades into an angry glare. The protagonist's lag in interpreting the situation (because the information was intentionally misleading) would suggest the recipient was slow to understand. If successful, the audience would also experience the confusion, identifying with the protagonist.
This seems like the kind of thing that is used in cinematography, but I cannot think of an example right now.
*If you accept that one could make an informative statement with no anticipated evocation. It could be argued to be an implicit performative, intended to keep someone from taking action, or to assert yourself as helpful so that others treat you in a certain manner.
Wednesday, January 27, 2016
Lacan
This man will be one of my main references. His diagrams make sense to me. I am just worried he has already completely covered what I am researching.
"Hitchcock’s ‘Vertigo’ as paradigm Vertigo’s ‘perfect fit’ allows a retro-fitting of the L-scheme to correspond to the boundary language diagram (BoLaGram). Just as prying open the L-scheme of ‘The Sandman’ led to an articulation of the two themes of contractual exchange and optics, Vertigo’s four elements ‘open up’ the relationships that pivot around the jewel. The jewel was a fake copied from a portrait of the deceased Hispanic beauty, Carlotta Valdez. Scotty is lured into Elster’s murder plot, which involves hiring Judy to impersonate his wife and appear to be possessed by Carlotta’s spirit. Scotty thinks he has rescued her from madness, but she lures him to a Colonial monastery, where Elster has concealed himself and his real wife in a tower. Just as Judy climbs to the top and hides, Scotty ‘witnesses’ the fall of the real wife and believes she has committed suicide on account of her madness. Recovering from the trauma, he finds a shop girl who resembles Madeleine and pursues her, persuading her to be remade in the likeness of Madeleine. In her apartment he discovers the jewel, the Deleuzian ‘demark’, which is Real precisely because it is a fake, just as Judy is Real precisely because SHE is fake! The revised L-scheme shows how the contractual relationship between Scotty and Elster (the symbolic relationship), afforded a Ø-projection of Madeleine (who ‘really was’ Judy) that created an anamorphic line of action in the fi lm. Jewels, cigarette lighters, rings, keys and other small precious objects work well as ‘object-causes of desire’ because their value is ‘inestimable’ and beyond their function and materiality."
...wow...
I am going to have to watch Vertigo again.
More:
http://art3idea.psu.edu/locus/diagrams/L-Scheme_Lacan.pdf
"Hitchcock’s ‘Vertigo’ as paradigm Vertigo’s ‘perfect fit’ allows a retro-fitting of the L-scheme to correspond to the boundary language diagram (BoLaGram). Just as prying open the L-scheme of ‘The Sandman’ led to an articulation of the two themes of contractual exchange and optics, Vertigo’s four elements ‘open up’ the relationships that pivot around the jewel. The jewel was a fake copied from a portrait of the deceased Hispanic beauty, Carlotta Valdez. Scotty is lured into Elster’s murder plot, which involves hiring Judy to impersonate his wife and appear to be possessed by Carlotta’s spirit. Scotty thinks he has rescued her from madness, but she lures him to a Colonial monastery, where Elster has concealed himself and his real wife in a tower. Just as Judy climbs to the top and hides, Scotty ‘witnesses’ the fall of the real wife and believes she has committed suicide on account of her madness. Recovering from the trauma, he finds a shop girl who resembles Madeleine and pursues her, persuading her to be remade in the likeness of Madeleine. In her apartment he discovers the jewel, the Deleuzian ‘demark’, which is Real precisely because it is a fake, just as Judy is Real precisely because SHE is fake! The revised L-scheme shows how the contractual relationship between Scotty and Elster (the symbolic relationship), afforded a Ø-projection of Madeleine (who ‘really was’ Judy) that created an anamorphic line of action in the fi lm. Jewels, cigarette lighters, rings, keys and other small precious objects work well as ‘object-causes of desire’ because their value is ‘inestimable’ and beyond their function and materiality."
...wow...
I am going to have to watch Vertigo again.
More:
http://art3idea.psu.edu/locus/diagrams/L-Scheme_Lacan.pdf
Render Butcher Shop Awning
I am getting "Error: line 1: Cannot find procedure 'SubmitJobToDeadline'" from Maya, but here is a still from the awning update.
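A MEL "Cannot find procedure" error generally means the procedure's script was never sourced in the current Maya session. Here is a minimal sketch of sourcing the Deadline submitter by hand from the Script Editor; the script name and repository path are assumptions based on a typical Deadline install, so check the actual repository.

```python
# A minimal sketch, assuming the submitter script is named
# SubmitMayaToDeadline.mel and lives in the repository's submission folder
# (both assumptions -- verify against your Deadline install).
import maya.mel as mel

# Hypothetical path to the Deadline repository's Maya submission script.
deadline_script = "C:/DeadlineRepository/submission/Maya/Main/SubmitMayaToDeadline.mel"

# Source the MEL file so its global procedures are defined, then call the
# procedure the shelf button was trying to run.
mel.eval('source "{0}";'.format(deadline_script))
mel.eval('SubmitJobToDeadline();')
```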
Tuesday, January 26, 2016
Drug Store to Butcher Shop
Monday, January 25, 2016
Sunday, January 24, 2016
More Portable 3D Scanning
I really want to do a dolly zoom for the interior. That type of shot requires a lot of prep, and encompasses a lot of background because of the angle. You can't really fake it with images, so I was hoping I could at least partially get the environment through scanning.
ReconstructMe does a passable job, but it does not export a color file in a format Maya recognizes. There was a converter made by Stanford, but it only works up to Maya 8.5 (I tried it with 2014 and it failed). Meshlab converts to dae (among other formats Maya should read), but every attempt to import to Maya with color has failed...
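My guess is the color survives export as per-vertex data rather than as a texture file, and Maya's importers drop it. A rough workaround sketch, assuming an ASCII PLY with red/green/blue vertex properties and that Maya keeps the vertex order when importing the same geometry (both assumptions): parse the colors myself and paint them on with polyColorPerVertex.

```python
# A rough sketch, assuming an ASCII PLY with per-vertex red/green/blue
# properties (as Meshlab can export) and that Maya imported the same
# geometry with an unchanged vertex order -- both are assumptions.
import maya.cmds as cmds

def apply_ply_vertex_colors(ply_path, mesh):
    """Copy red/green/blue vertex properties from an ASCII PLY onto a mesh."""
    with open(ply_path) as f:
        lines = f.read().splitlines()
    props, n_verts, in_vertex, body = [], 0, False, 0
    # Walk the header for the vertex count and the property order.
    for idx, line in enumerate(lines):
        tok = line.split()
        if not tok:
            continue
        if tok[0] == "element":
            in_vertex = (tok[1] == "vertex")
            if in_vertex:
                n_verts = int(tok[2])
        elif tok[0] == "property" and in_vertex:
            props.append(tok[-1])      # e.g. x, y, z, red, green, blue
        elif tok[0] == "end_header":
            body = idx + 1
            break
    r, g, b = props.index("red"), props.index("green"), props.index("blue")
    # One line per vertex in an ASCII PLY; paint each vertex with its color.
    for i in range(n_verts):
        vals = lines[body + i].split()
        cmds.polyColorPerVertex(
            "%s.vtx[%d]" % (mesh, i),
            rgb=(int(vals[r]) / 255.0, int(vals[g]) / 255.0, int(vals[b]) / 255.0))
    # Turn on vertex color display so the result is visible in the viewport.
    cmds.polyOptions(mesh, colorShadedDisplay=True)

# Example (hypothetical names): apply_ply_vertex_colors("scan.ply", "scanMesh")
```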
My desktop runs the software, but my tablet is not up to the task. I have a power converter, but it only supplies 400 to 600W. I might be able to rig something, but it would not be very portable. If I am going to shell out $1000 for a new laptop, I would like to get something that I could use with the new Kinect, but I haven't seen the new one work at all yet.
MoCap for Text
My focus right now is on blending media to make a form unique to the hybrid medium. I am taking Motion Capture this semester, so I can make acting text.
This is the scene with the panhandler. I have made some adjustments to the environment so that I can walk through it as a test, behind the goose, with the street footage projected on an object for reference. This version is not animated, but I will make one with a TIFF sequence for the capture session.
Trying to Use a Dolly Zoom on Text Alone
Thought I would try a dolly zoom with text for a mixed media project. Not a resounding success. A dolly zoom really needs an environment to be successful.
Saturday, January 23, 2016
Depth map from the Lightfield Camera
The time I spent trying to match the model to the live footage last semester could have been saved if I had a portable 3D scanner. The Kinect generates a depth map, but it is too noisy, and you can't catch more than one perspective. When used with ReconstructMe, you can generate a model in real time that is usable as a shadow mask object, but the processing power of my tablet causes the frame rate to drop and the model generation to abort.
I thought I would try a lightfield camera because the depth map is generated from the color images, and I thought it would give me a better registration. But the Lytro camera's depth map was much worse than the Kinect's.
Lytro Depth Map
Image
Depth Map
Overlay
The depth is created by a difference comparator function between the images captured across the array. This eliminates the IR beam and captures the depth map and image through the same lens, but the reliance on differences in pixel color and brightness to generate depth fails to register a depth if the surface is uniform. The software does some adjustments, and with a bit of editing the depth map can be used to create a virtual focus and aperture, but the depth resolution and fidelity are low.
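A toy version of that comparator idea, using OpenCV block matching on two sub-aperture views. The filenames and the two-view simplification are my assumptions; the Lytro actually compares across the whole microlens array.

```python
# A sketch of depth-from-images via block matching. left.png / right.png are
# stand-ins for two views extracted from the lightfield array (hypothetical
# filenames).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching compares pixel neighborhoods between the views; depth is
# proportional to 1/disparity.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)

# On a uniform surface every candidate block looks the same, so the match is
# ambiguous and the matcher returns no (or garbage) disparity -- the same
# failure the Lytro shows on featureless areas.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", vis.astype("uint8"))
```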
Conclusion:
The lightfield camera's depth map is unusable for the creation of 3D models.
Friday, January 22, 2016
Updated Research Statement, Topic, and Questions
Research Statement
The research topic I have been working on is centered on the composition of a story that blends cinematic animation, interactive media, and literature in a complementary way. I am exploring the topic through the creation of a user-driven animation that contains some of the poignant elements that have been filtered from children's stories over the last few generations. The intended form would adapt contemporary devices from cinematography and interactive media, mixing text, animated characters, and live action through camera movement, turning pages, and a touch screen in a way that I am trying to compile seamlessly and coherently. I hope that the final product is an emotionally complex artifact that children and parents will find meaningful, and that my research will yield revelations into the creation of mixed-platform narration.
Research Topic: Construction of a blended-media children's story with complicated emotional impact
Narrowed research topics:
· Relationship between phrases, implication, and interpretation across cinematography, literature, and interactive media
· Adaptation of a book to an interactive animation
· The generation of communicative elements unique to hybrid media
More specific topic: How can I invoke the emotional impact of a somber bedtime story in a contemporary form without corrupting the intimate quality of a book?
Research Questions:
1. How can analysis of overarching communicative structure aid the fusion of blended media forms?
2. What are the constituent phrases of cinematography, literature, and interactive media?
3. How do the dialects of cinematography, literature, and interactive media function (constructively and destructively) when combined?
Mobile Scanning
I am working on putting together a mobile 3D scanner to make a quick 3D model of an environment I am shooting, for shadows and reflections. I have the 3D construction working on my PC and the 3D scanner working on my tablet, but my tablet drops frames, which causes the 3D construction software to abort.
I thought I would try to use the depth map and photo to make a 3D model.
I spent several hours trying to do this tutorial in Maya 2016, and the changes are so drastic that I couldn't figure it out; the input/output controls are completely different.
I found a tutorial to do it with Photoshop, and it worked, but this was the result. There is a discrepancy between the focal length of the camera lens and the depth map. I assume the Kinect SDK software compensates.
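The focal length issue makes sense if you look at the pinhole back-projection that turns a depth map into points. A sketch with ballpark Kinect v1 intrinsics; the fx/fy/cx/cy values here are assumptions, while the SDK knows the calibrated ones.

```python
# A sketch of why the focal length matters: back-projecting a depth map into
# a point cloud with the pinhole camera model. The intrinsics are assumed
# ballpark values for a Kinect v1 depth camera, not calibrated numbers.
import numpy as np

def depth_to_points(depth_m, fx=585.0, fy=585.0, cx=319.5, cy=239.5):
    """depth_m: HxW array of depths in meters. Returns an Nx3 point array."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx      # a wrong fx stretches or squeezes X
    y = (v - cy) * z / fy      # a wrong fy does the same to Y
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```

Using a mismatched fx/fy here produces exactly the kind of warped geometry I got out of the Photoshop route.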
This is what the live capture looks like.
Long shot point cloud across the lab (about 25 feet)
Long shot point cloud across the lab from above (about 25 feet)
Long shot with texture across the lab from above (about 25 feet)
Close up of the chairs
Long shot point cloud across the lab from above (about 25 feet)
Close Point cloud
Close up color
The results are not great. The software that creates a model from the Kinect clearly does a lot of processing to eliminate the noise. My tablet is not up to the task, and classroom services no longer lends laptops. I have a lightfield camera, so I am going to try creating the model from photographs, with the rough depth map models as a reference.
Wednesday, January 20, 2016
Butcher Shop Interior
This is my first test texture for the butcher shop interior. It is a Boolean of the word "See" mapped onto a scan of my face (with no eyes).
"See what people do"
Tuesday, January 19, 2016
Unity
I got Unity running at home, and I have started the ball game. I ran into a bit of a problem getting the cubes to disappear, so I am going to head in early to try to get it worked out before class.
Monday, January 18, 2016
The Big Idea (as of 2 am Tuesday 01/19/16)
The Big Idea (subject to change without notice)
This semester I am focused on creating a form that blends cinematic animation, interactive media, and literature in a complementary way. The shot that I am working on now is the interior of the building Tuba-Goose entered last semester. I am changing the location to a butcher shop because it adds to the dramatic effect, and horror films have a lot of paradigmatic camera and lighting techniques that would be interesting to apply to another medium. I thought it would be a complementary mix of media to make the interior all narrative text (people, shelves, everything except the goose would be extruded polygonal text), use motion capture to animate the people, and use a dolly zoom for a shot where the goose sees the butcher doing what butchers do.
I am not sure how I will texture everything or how I will organize the words so they function as characters and are readable as text, but I think motion capture will add an inherently human quality to the movement that should give me more freedom to deviate from anthropomorphic shapes while relaying the intended actions. I am hoping that when I have the movement for the scene, I will be able to create a model based on the text and the keyframes that reveals itself to the reader in a way that alludes to the internal dialogue of the goose when he realizes what is going on in the butcher shop (I want the audience to be thinking "Is that a person chopping off the head of... YES IT IS!" in parallel with the character's realization). This shot does not do much to exploit interactive media, but I am worried that compounding too many distracting elements (e.g., adding a tilt interaction, prompting an expand gesture on the touch screen, or something of that nature) would drown the humble, quiet nature of a book.
Sunday, January 17, 2016
Unity Game Design Breakdown
My digital popup book is going to run on a tablet through the Unity game engine.
Current Animatic
Execution
Progression of the animation is user-driven through a swiping motion. Indication of when and how to swipe is given through "goose tracks".
Most of the animation is CGI mixed with live action, and I do not expect most tablets to be able to handle the level of detail and texture needed to render my model. So I am pre-rendering the footage, and the swiping motion sets the speed and scrubs through the footage rather than moving characters in a live 3D environment.
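The tablet build will be C# in Unity, but the scrub logic is simple enough to sketch out here; all the constants are guesses I will tune later.

```python
# A sketch of swipe-to-scrub over a pre-rendered frame sequence: swipe
# velocity nudges a playhead through the frame range, with damping so the
# pages "coast" to a stop. Gain and damping values are placeholders.
class SwipeScrubber:
    def __init__(self, frame_count, gain=0.05, damping=0.90):
        self.frame_count = frame_count
        self.playhead = 0.0      # fractional frame position
        self.velocity = 0.0      # frames advanced per update tick
        self.gain = gain         # converts swipe pixels to frames
        self.damping = damping   # per-tick velocity decay, 0..1

    def on_swipe(self, delta_pixels):
        # A faster swipe scrubs faster; direction follows the finger.
        self.velocity += delta_pixels * self.gain

    def update(self):
        # Advance, clamp to the rendered footage, and coast.
        self.playhead = max(0.0, min(self.frame_count - 1,
                                     self.playhead + self.velocity))
        self.velocity *= self.damping
        return int(self.playhead)   # index of the frame to display
```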
Tuesday, January 12, 2016
Junk
I think it is dactylic tetrameter.
Dactyl (stressed + unstressed + unstressed), tetrameter (four groups).
But it ends in an iamb (an unstressed syllable followed by a stressed syllable).
It might be anapestic tetrameter.
Wednesday, January 6, 2016
Prompting a Dialogue in 3D!!!
The most believable responses are reactions from real people to unscripted scenarios, not actors. That is why I am trying to get authentic reactions from people, and edit the goose into the footage. As an extension, I have been working on a teleprompter for unwitting actors.
I want the audience's reaction to be surprised and exclamatory; something like "Is that a goose with a tuba?!", not "What is this for?" or "How did you make it 3D?". So I am trying to absorb the character into the viewer's space, rather than trying to adapt the viewer to the character's world.
The display is 3D (without glasses) with parallax, and the background is from a camera behind the monitor.
Goose Vision
The interlacing takes up a huge amount of the processing power, so the live version will have only a few responses, triggered manually (AI-reactive responses are slower, and they are unreliable because the input cycles overlap the processing).
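For reference, here is roughly what the interlacer has to do every frame, sketched for a simple parallax-barrier column layout. The geometry is a simplification; real lenticular layouts are usually slanted and per-subpixel, which is part of why it is so expensive.

```python
# A sketch of column interlacing for a glasses-free 3D display: every output
# column is picked from one of the two eye views, so the whole frame gets
# reshuffled on every frame. The even/odd layout is an assumption.
import numpy as np

def interlace_columns(left, right):
    """left, right: HxWx3 uint8 eye views. Returns the interlaced frame."""
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]    # even columns -> left eye
    out[:, 1::2] = right[:, 1::2]   # odd columns  -> right eye
    return out
```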
The Kinect 3D Sensor
The Kinect generates a lot of information about the environment; it might also be useful for defining the camera's position in the virtual space and grabbing a quick 3D model.
The two limiting factors for outdoor use are sunlight, which has a lot of infrared (the Kinect tracks depth with an array of IR laser beams), and range: the Kinect is only designed to range objects within 10 to 20 feet of the sensor. I am looking into lenses and filters for both issues.
The Microsoft SDK (seen above, running on a laptop) is very stable, but it does not communicate with Isadora, so I could not use it to do head tracking in the field; it might still be useful for making a live 3D map of the camera position.
Friday, January 1, 2016
Digital "Pop Out" Book
I've been scooped!
Mine is going to be better.
I like the tilt action for the fruit.
This one is terrible. Not like a book at all; it is a Flash game with some words in it. It is like they wanted to increase your kid's ADHD. But I like the candle illuminating text on the floor.