4/28/2010

Wikipedia Eater

Thanks to stimulation sparked by my favorite new Twitterer, @exoplanetology, I just went on a lovely little Wikipedia tour. Allow me to curate for you:

Dyson Bubbles on NextBigFuture:

Speculative physics insanity

Unlike the Dyson swarm, the constructs making it up are not in orbit around the star, but would be statites—satellites suspended by use of enormous light sails using radiation pressure to counteract the star's pull of gravity. Such constructs would not be in danger of collision or of eclipsing one another; they would be totally stationary with regard to the star, and independent of one another. As the ratio of radiation pressure to the force of gravity from a star is constant regardless of distance (provided the statite has an unobstructed line-of-sight to the surface of its star), such statites could also vary their distance from their central star.
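To put a number on that balance (a back-of-the-envelope sketch with standard solar values, not anything from the excerpt): since radiation pressure and gravity both fall off as the square of the distance, the distance cancels and the only thing that matters is the sail's mass per unit area.

```python
# Statite condition: radiation force on an absorbing sail, L*A/(4*pi*r^2*c),
# must equal gravity on it, G*M*sigma*A/r^2. The r^2 cancels, leaving
# sigma = L / (4*pi*G*M*c). (Rough sketch; a perfectly reflecting sail
# doubles the radiation force and so doubles the allowed areal density.)

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
L_sun = 3.828e26     # solar luminosity, W
M_sun = 1.989e30     # solar mass, kg

sigma_crit = L_sun / (4 * math.pi * G * M_sun * c)

print(f"critical areal density: {sigma_crit * 1000:.2f} g/m^2")
# ~0.77 g/m^2 -- roughly a hundred times lighter per square meter than
# printer paper, which is why this stays filed under speculative insanity.
```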


Dyson Spheres (Wikipedia):

The star eater

A Dyson sphere is a hypothetical megastructure originally described by Freeman Dyson. Such a "sphere" would be a system of orbiting solar power satellites meant to completely encompass a star and capture most or all of its energy output. Dyson speculated that such structures would be the logical consequence of the long-term survival and escalating energy needs of a technological civilization, and proposed that searching for evidence of the existence of such structures might lead to the detection of advanced intelligent extraterrestrial life.
Since then, other variant designs involving building an artificial structure — or a series of structures — to encompass a star have been proposed in exploratory engineering or described in science fiction under the name "Dyson sphere". These later proposals have not been limited to solar power stations — many involve habitation or industrial elements. Most fictional depictions describe a solid shell of matter enclosing a star, which is considered the least plausible variant of the idea.


The Penrose Process (Wikipedia):

basically a centripetal brake for a black hole. Made out of light.

The Penrose process (also called Penrose mechanism) is a process theorised by Roger Penrose wherein energy can be extracted from a rotating black hole. That extraction is made possible by the existence of a region of the Kerr spacetime called the ergoregion, a region in which a particle is necessarily propelled in locomotive concurrence with the rotating spacetime. In the process, a lump of matter enters into the ergoregion of the black hole, and once it enters the ergoregion, is split into two. The momentum of the two pieces of matter can be arranged so that one piece escapes to infinity, whilst the other falls past the outer event horizon into the hole. The escaping piece of matter can possibly have greater mass-energy than the original infalling piece of matter. In summary, the process results in a decrease in the angular momentum of the black hole, and that reduction corresponds to a transference of energy whereby the momentum lost is converted to energy extracted.
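For a sense of scale (a rough sketch using the standard Kerr black hole result, not anything from the excerpt): only the rotational energy above the hole's "irreducible mass" can be tapped, which caps the Penrose process at roughly 29% of the hole's total mass-energy.

```python
# How much of a rotating black hole's mass-energy the Penrose process can
# reach, via the Kerr irreducible-mass formula. a_star is the dimensionless
# spin: 0 = non-rotating, 1 = maximal. (Sketch, not the blog's own math.)

import math

def extractable_fraction(a_star: float) -> float:
    """Fraction of total mass-energy sitting above the irreducible mass."""
    m_irr_over_m = math.sqrt((1 + math.sqrt(1 - a_star**2)) / 2)
    return 1 - m_irr_over_m

for a_star in (0.5, 0.9, 0.998, 1.0):
    print(f"spin {a_star:>5}: {extractable_fraction(a_star):.1%} extractable")

# At maximal spin this tops out at 1 - 1/sqrt(2), about 29% of the hole's
# mass-energy -- compare fusion, which releases well under 1% of rest mass.
```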


Kardashev Scale (wikipedia):

Soviets tended to think big.

The Kardashev scale is a method of measuring a civilization's level of technological advancement. The scale is only theoretical and in terms of an actual civilization highly speculative; however, it puts energy consumption of an entire civilization in a cosmic perspective. It was first proposed in 1964 by the Soviet Russian astronomer Nikolai Kardashev. The scale has three designated categories called Type I, II, and III. These are based on the amount of usable energy a civilization has at its disposal, and the degree of space colonization. In general terms, a Type I civilization has achieved mastery of the resources of its home planet, Type II of its solar system, and Type III of its galaxy.
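Carl Sagan later proposed a continuous interpolation of the scale, K = (log10 P − 6) / 10 with P in watts, which makes the types easy to compare. A quick sketch (the power figures below are rough, commonly quoted estimates, not from this post):

```python
# Sagan's continuous Kardashev rating: Type I ~ 1e16 W, Type II ~ 1e26 W,
# Type III ~ 1e36 W. (Illustrative estimates only.)

import math

def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

examples = {
    "humanity today (~2e13 W)": 2e13,
    "all sunlight hitting Earth (~1.7e17 W)": 1.7e17,
    "full Dyson sphere around the Sun (~3.8e26 W)": 3.8e26,
    "an entire galaxy (~1e37 W)": 1e37,
}

for label, p in examples.items():
    print(f"{label}: Type {kardashev(p):.2f}")
# Humanity lands somewhere around Type 0.7 -- still "pre-stage Kardashev 1."
```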


Malthusian Catastrophe (Wikipedia):

peak metabolism.

A Malthusian catastrophe (also called a Malthusian check, crisis, disaster, or nightmare) was originally foreseen to be a forced return to subsistence-level conditions once population growth had outpaced agricultural production. Later formulations consider economic growth limits as well. The term is also commonly used in discussions of oil depletion.
Based on the work of political economist Thomas Malthus (1766–1834), theories of Malthusian catastrophe are very similar to the Iron Law of Wages. The main difference is that the Malthusian theories predict what will happen over several generations or centuries, whereas the Iron Law of Wages predicts what will happen in a matter of years and decades.
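Malthus's argument is, at bottom, a growth-rate mismatch: population compounds geometrically while (in his assumption) subsistence only grows arithmetically. A toy simulation with invented numbers shows how reliably the curves cross:

```python
# Malthusian catastrophe as arithmetic: compounding population eventually
# overtakes linearly growing food capacity. Parameters are invented purely
# for illustration, not historical estimates.

population = 1.0          # arbitrary units; start with food for twice the people
food_capacity = 2.0
growth_rate = 0.02        # 2% per year, compounding
capacity_increase = 0.02  # flat, linear gain per year

for year in range(0, 501):
    if population > food_capacity:
        print(f"catastrophe (per Malthus) around year {year}")
        break
    population *= 1 + growth_rate
    food_capacity += capacity_increase
else:
    print("no crossing within 500 years")
```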


The Omega Point (wikipedia):

WHAT!?!?

The Omega Point is a term used by Tulane University professor of physics and mathematics Frank J. Tipler to describe what he maintains is a physically-necessary cosmological state in the far future of the universe. According to his Omega Point Theory, as the universe comes to an end at a singularity in a particular form of the Big Crunch, the computational capacity of the universe (in terms of both its processor speed and memory storage) increases unlimitedly with a hyperbolic growth rate as the radius of the universe goes to zero, allowing an infinite number of bits to be processed and stored before the end of spacetime. Via this supertask, a simulation run on this universal computer can thereby continue forever in its own terms (i.e., in "experiential time"), even though the universe lasts only a finite amount of proper time. Tipler states this theory requires that the current known laws of physics are true descriptions of reality, which he says implies that there be intelligent civilizations in existence at the appropriate time in order to force the collapse of the universe and then manipulate its collapse so that the computational capacity of the universe can diverge to infinity.
Tipler identifies this final singularity and its state of infinite informational capacity with God. The implication of this theory for present-day humans is that this ultimate cosmic computer will be able to run computer emulations which are perfectly accurate down to the quantum level of all intelligent life which has ever lived, by recreating simulations of all possible quantum brain states within the emulation. This would manifest as a simulated reality. From the perspective of the recreated inhabitant, the states near the Omega Point would represent their resurrection in an infinite-duration afterlife, which could take any imaginable form due to its virtual nature.
Assuming that achieving the Omega Point is physically possible, Tipler says this would be accomplished by "downloaded" human consciousness on quantum computers in tiny starships that could exponentially explore space, many times faster than biological human beings. Tipler argues that the incredible expense of keeping humans alive in space implies that flesh-and-blood humans will never personally travel to other stars. Instead, highly efficient uploads of human minds ("mind children" as Tipler calls them, they being the mental uploads of our descendants, or of ourselves[1]) and artificial intelligences will spread civilization throughout space. According to Tipler, this should likely start before 2100. Small spaceships under heavy acceleration up to relativistic speeds could then reach nearby stars in less than a decade. In one million years, these intelligent von Neumann probes would have completely colonized the Milky Way galaxy. In 100 million years, the Virgo Supercluster would be colonized. From that point on, the entire visible universe would be engulfed by these "mind children" as it approaches the point of maximum expansion. Per this cosmological model, the final singularity of the Omega Point itself will be reached between 10^18 and 10^19 years.




Freak out.

So I was thinking, about the time I got to the article about the Omega Point, that maybe with the cancellation of the human space program, we are reaching a turning point. If, for some reason, society does not completely collapse, perhaps this will be the high-water mark for humanity conquering the stars. Maybe we'll look at this period as the time when we wised up, and stopped sending fragile corpses up into irradiated space wrapped in tin foil, just as now we look back at when we used to send men miles under the ground to dig up coal at great expense and many lost lives. Oh... wait a minute.

But seriously, I'm not saying people will never go into space again. Earth orbit is the next area just PRIMED for gentrification. A few Starbucks, a couple of leash-free dog park satellites, and we could totally start Facebook groups to complain about the lack of magnet schools at the Lagrangian points. But space exploration? Why bother? What is an astronaut going to do on Jupiter? Sweat in canned air while s/he watches a flag disintegrate under the gravity?

But heck, we're using data-linked robots to bomb people in Central Asia, so why wouldn't we use robots to terraform Titan, and settle for the digital postcard?

"Because it is there?" Maybe space will be solely the rich person's pursuit. They'll spend years and millions of dollars training to go walk on asteroids and lesser-known moons. Then, fifty years later, after they make it back to earth, they can talk about the multi-solar eclipses in the portion of the asteroid belt now named after them with tears in their eyes, while they have a cocktail at the Explorer's Club. Meanwhile, the rest of us will be crowd-mapping Plutonian mineral deposits via our HD, 3D displays from our compounds here on good old earth, for an inflation-equivalent 30 cents an hour.

I am not nearly the impressive nerd-knowledge zone that Exoplanetology or NextBigFuture is, but it seems pretty clear to me that once you surmount the problem of earth's initial gravity and atmosphere, slinging a few thousand camera-equipped smart phones into the galaxy is way easier than trying to ship a human there. At least until we get gravitational drives, or spin-dizzies, or something else I haven't read about on Wikipedia yet. But there's an awful lot of pre-stage Kardashev 1 left before we get there.
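The "way easier" intuition is mostly the Tsiolkovsky rocket equation: propellant grows exponentially with the velocity change you demand and linearly with the dry mass you insist on hauling, so every kilogram of astronaut, water, and shielding gets multiplied the whole way up the stack. A rough sketch with illustrative numbers (none of them from this post):

```python
# Rocket equation: propellant = dry_mass * (exp(delta_v / v_exhaust) - 1).
# Shrinking the payload from a crewed, life-supported ship to a phone-sized
# probe shrinks the whole stack by the same factor. Numbers are illustrative.

import math

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, exhaust_velocity_ms: float) -> float:
    """Propellant required to give dry_mass the requested delta-v."""
    mass_ratio = math.exp(delta_v_ms / exhaust_velocity_ms)
    return dry_mass_kg * (mass_ratio - 1)

delta_v = 16_000       # m/s, a rough "leave Earth and cruise somewhere" budget
v_exhaust = 4_400      # m/s, hydrogen/oxygen-class chemical engine

for label, dry in (("smartphone probe (0.2 kg)", 0.2),
                   ("small robotic orbiter (500 kg)", 500),
                   ("crewed ship with life support (100,000 kg)", 100_000)):
    print(f"{label}: ~{propellant_mass(dry, delta_v, v_exhaust):,.0f} kg of propellant")
```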

More seriously (actually the serious part): I do think we are primed for a lot of new speculative thinking about the future of space exploration. Now that the human narrative has been derailed, it's the perfect time to start thinking fresh.

In the meantime, as I said earlier, I'm into fusion power for the helium exhaust. Here's to sending massive structures into the air, only to have them come back down again.

4/21/2010

Against Theory, For Machines

I wrote this yesterday:

Rather than come up with an overall "theory" of this clip, I just want to draw your attention to some elements. I've been working on a theory of criticism lately, of various narrative arts, that instead of trying to apply an overall "theorem" to the piece or genre or form, instead finds various machinations of how any such theory will work. I'll go into this later when I have my ideas more solidified, but for now, let's leave it at this: if you were to approach a complicated machine like a car, you might not try and explain "how the car works", but instead pick out some unique and emblematic parts of the car, and explain how those work. You might explain a wheel and axle, or a disc brake, or a 4-stroke combustion engine. These are all parts of how the car works, but do not attempt to explain it in totality, which would necessitate various economic theories of production, design theory, raw materials processing, social forces in transportation, and on and on. Nothing works independently of anything else. But to learn anything, we have to start taking things apart. And you start taking apart most things by unscrewing some bolts, and unplugging a few wires, looking carefully while you do so.


I was reminded of this again when I started to pick apart Kazys Varnelis' essay on history, temporality, and modernism.

I have this pet peeve about cultural studies (not that the essay is cultural studies, but you'll see where I'm going). I appreciate the work of cultural studies, of not throwing away anything in the pursuit of evidence about what exactly we, as a culture, might be up to, philosophically, historically, anthropologically, and so on. But it leads to an all too common problem: the theorist, in the effort to present as much evidence as possible and still tie it all together, is forced to wash over certain pieces, or many pieces, in the effort not to spiral out into an Arcades Project-style work. Rather than, say, returning to the car example, "let's start by talking about how a combustion engine works"--starting with a particular instance of the material--they begin with a general conclusion: "a car works by burning fossil fuels and creating CO2." Which is true, of course, and important. But while it is culturally important information about a car, the statement is not promoting any understanding about how a car works, or more importantly, how we might fix the negative effect described.

Back in my (more) headstrong youth, I had an unfortunate run-in with a cultural studies professor, in which I said that her work, which collected examples from pop culture that "caused young women to have a poor body image," was obvious and superfluous. Clearly some women have a poor body image, and naturally, some elements of pop culture are related. But the research said nothing about the link, the mechanism, or which women might be affected, and to what degree. It presented the same statement about the car and CO2. It presented a generalized conclusion about media and body image, by merely identifying some evidence that was probably the cause, and some evidence that was probably the effect.

Although I feel bad about acting like such a jerk about it, I still think the criticism is relevant. It is really easy for a reader already harboring the preconceived idea of A -> B to fall into the trap of spending time reading theory that is essentially tautological, or at the very least, generalizes conclusions rather than studying the mechanism. The mechanics of our theory are the basis of being able to do anything about it. Anything else is pure mythos, a nice story that makes us feel better about the way things are without intentionally affecting them. Successful theory, or stories for that matter, change the way we think about things, at the very least presenting an angle for cross-reference of our experience and perceptions. Rather than simply be a jerk (which, like generalizing, is more than easy enough) I now hope to draw out at least a few mechanisms when I encounter something perhaps leaning towards the urge to generalize. Show someone how to change the oil filter maybe, or point out a coolant leak. Have a few beers and discuss the pros and cons of alternate fuel sources. That sort of thing.

I actually don't know a lot about all the details of cars. But I know a little bit about metaphysics, and psychoanalysis, so this is where my explanations of epistemological mechanisms often come from. (One day, I'll write a metaphysical Chilton's guide to the unconscious. The Ego and the Id, but with better diagrams updated for the current models.)

Everything that I'd want to add, as diagrams of metaphysical mechanisms, to Varnelis' essays, I've actually already written, in a long drawn-out annotation of Heidegger's Section 73 of Being and Time: "The Vulgar Understanding of History and the Occurrence of Dasein." Which you should dig through by all means, if you are some kind of meta-structural engineer with a taste for blog writing.

But what I really want to get at is the metaphysical distinction between temporality in Heidegger's notion of the "world-historical", and temporality in his notion of the "temporality of being". In one sense (I know what I just said about generalizing, but this is a blog post first of all, and I can feel your attention waning, and I'm trying to meet those unacquainted with Heidegger halfway) this is the difference between the past and the present. The element of the past in things is its connection to a world of the past, a certain time that is not this time. Whereas our sense of "now", our connectedness between our consciousness and the things we are conscious of, is part of Dasein, or authentic being. Heidegger is so firm on separating these two notions of temporality that he calls the former "secondary historicity" and the latter "primary historicity", a clear privileging of being over objects.

The reason I'm so interested in psychoanalysis is that despite anything you may feel about the pros and cons of the clinical method, it is one of the most compelling attempts at theorizing the mechanics of the brain from a metaphysical perspective. In other words, while neuroscience is the best at theorizing the mechanics of the brain from a chemical perspective, psychoanalysis excels at finding the roots of philosophical issues in our heads. Where does the notion of space and time exist in the function of the brain? Well, there are areas of the brain that affect our conception of time and space. But what is the metaphysical relationship between time and space, such as it exists inside the chemistry of the brain? What, in other words, is time, and what is space, such as they exist in the brain? It is a phenomenal reduction: a reduction of metaphysics to the metaphysical, that which can be affected and studied in the means through which we perceive it. Neuroscience is a chemical reduction: a reduction of consciousness to what can be affected and studied by chemistry. Very few studies have taken place that combine these two. What would happen if we asked a renowned metaphysician to review his/her theories while certain areas of his/her brain were stimulated with electricity? Drug users are probably the closest to this concept, ironically. By taking unscientific risks, and by describing the world in humanistic generalities (read: new-agey hippie nonsense) they are re-theorizing the world as they re-organize their brain chemistry. Maybe the Third Eye is more philosophically and scientifically relevant than we give it credit for. But that's another tangent, another part of the car.

But as far as temporality goes, what Heidegger distinguishes as "primary historicity" or Being, in my opinion, deserves no more authenticity than "secondary historicity", or world-history. Being, our conscious knowledge of the present, and "nowness", is no different than our appreciation of the historical value of objects.

The best example is a historical artifact. An electron tube is an artifact--no longer current technology, it has since been replaced by newer objects. But it still acts as an electrical component, despite its apparent "age". An even better example is an arrowhead. When I was a kid, I found any number of rocks, which I was sure were "actually arrowheads". When does the rock stop being a rock, and become an arrowhead? In my six-year-old mind. We ascribe value to objects, label them with significance, based on a large network of semiotic structures in our mind. The "age" of the object is actually not relevant. It exists only simultaneously with its current presence, and its current significance.

But there is a lingering element of "authenticity", Heidegger's privilege, given to certain objects within their semiotic language. An amplifier built with tubes is more "authentic". This is the meaning of its "age" to us. My arrowheads were more important than other rocks because of what my six-year-old mind believed them to be. The authentic form of temporality Heidegger identified is no more than the most significant signifier in our language of time--Being, the present. The zero of the number line, the point from which duration stretches, the ego as the surface of the consciousness, the Signifier, the body without organs, a unified monad, to which everything else must attach itself if it is to be meaningful.

The concepts I just referenced as the authentic point of signification all share a characteristic: they don't really exist. They are formed from their sub-infrastructural elements. They are the architecture, the perceived significant art, the "pure" expanse of color formed via the absorption of visible radiation, the "blank" wall, made from conglomerated minerals. They are the simplification, the generalization, the "I" in the ecology of consciousness, the "Mother Nature" in the chemical dynamics of biology. The vast field of significations of time and temporality only seems to come to a head in our sense of "now", when really the writing stretches all across and through the wall, and on any visible surface, and in any perceivable sound, in the outlines of any recognizable shape, and in the infinite extensions of any cognizable dimension.

Which is a lot of different places, both "real" and "not". It's a whole lot to make sense of, and a lot to form into any unified theory of consciousness, time, space, epistemology, philosophy, physics, or for goodness sakes, the Internet. But this is a place to start.

A car was not built from the driver's seat.

But luckily, we don't need to build a car to be able to drive one. What I want to say about Varnelis' essay is that there seems to be a bit of confusion, some back and forth between epochs of the "world-historical" and implementations of "temporal being". Even Heidegger seemed a bit confused by the distinction, and naturally so: in the modernist age, it would have been very difficult for anyone to reject the overwhelming authority of the "I", of the deep-rooted primary temporality of consciousness. Lacan capitalized the Signifier with reason. Back then, you better believe it was a proper noun.

But here's the thing about temporality, and accordingly, atemporality--the lack of importance of the primary historicity in current society is, actually, its own proof that it was never really so primary. How does a word get purged from a language? Everyone simply stops using it, without realizing that they don't use it anymore. The tip of an iceberg may not tell you anything about the shape of the ice underneath, but if you don't see the tip of an iceberg, then there's nothing underneath.

We are becoming predominantly fluid in our conception of the world-historical. Network culture, etc. Meanwhile, we are relying on temporal-Being less and less to define the world-historical. Have you noticed that April Fool's jokes thrive on the Internet? It's because Reality, the plane upon which a joke rests, is all warped on the Internet already. April Fool's jokes are proliferating, because our concept of world-history is now based on Wikipedia. Irony, trickery, double-meaning, and the good old nudge of the elbow are insinuating themselves into our concept of Truth. We roll with the punches, and nod along with the joke, because not even the serious is serious anymore. And yet, we still get things done.

So, the legacy of "ends of history" is not really important anymore. Any theory needing to call itself post-____ is not really relevant, either as a positive or negative example, because it is still naming itself "I" based on an authority of a temporal relationship. We don't need this evidence of increasing meaninglessness of historical narratives. We get the sense that we've already seen this episode, or at least maybe the episode this is a remake of. You don't need to be able to read a watch to know that it is stopped. You can hear the absence of ticking.

I always thought that the first sign of the end of grand narratives is that we'd look up and say, "Narratives? what are those? Is that like a riddle? Or a song?" When we say, "timeline? You mean like Twitter posts? Or youtube comments?" Then we'll know that causality is really over. Time will not be over just by theoretically "wanting" it; it will be over when you've lost your watch, and forgotten to care.

This doesn't mean that there is nothing to do, or that we shouldn't bother talking about it until it's already happened. Because we are doing it, and we are talking about it. Just some of us more than others. We just need to stop theorizing connections, and just connect, which theory does, to a large extent. But where it really connects is not where it attempts to generalize about theory as a whole (though we are all guilty of this, either to a small or a large extent). It connects with Tab A going into Slot B, with the material production of new significations that replace the old connections. Let's go out there and find the atemporality. Let's find it in Marx, let's find it in Galileo, let's find it in Irigaray, let's find it in cave paintings. Let's find it in places that aren't books, or the Internet. Let's bring these tools, these working parts back to the garage, set 'em up, and start actually using 'em to make stuff, not just put them on the shelf. Share 'em, too. Metaphysical tool library. When we are finding little parts of the "I" in all these supposedly "historical" things, and when we are using and moving back and forth across different epochs of time as easily as we perceive and express ourselves, then we will already be there.

4/20/2010

"Not here in the way we are"

"...to the human animal, that can take a wishful dream and give it a dimension of its own..." - Rod Serling



The Twilight Zone is masterful television, naturally. Not for pure originality, but because it folds so many of our speculative-fiction tropes into one format, namely, primetime television. The series was a modern Brothers Grimm of the uncanny and the intriguingly eerie.

In keeping with this particularly modern narrative role (is there anything that really says "Post-War Society" like the image of Rod Serling with a thin black tie and a cigarette in hand? File it next to Elvis' hips, and James Dean's car wreck.), most of the stories of the Twilight Zone revolve around psychological issues. Memory, personal appearance, perception, crises of the ego, and paranoia flow through these tales. And naturally, if we are discussing the modern human condition and its fissures and crevices, time is going to be one of the main characters, the lurking dark, waiting to finally show itself like a metaphysical Cthulhu. The mocking trope of the series, "...in the end, it was really people," is somewhat of a distraction. Because in the end, for all people, "it was time, all along."

"Time, time enough at last," is the horrifying conclusion of a particular character upon finally grasping, via the plot's journey, long after we have all discovered it, that all of humanity has been destroyed. Humanity, and humanism, is the story of the struggle against time, or the struggle with time, wrestling with its flow like swimming in the metaphorical river that "won't stay the same river twice". Remembering, forgetting, paranoia for the future, anxiety for the past, running from things, running to things, "before it's too late", "because it is too late."

In this particular clip I've featured at the beginning of this post, much of this comes to a head. I could tell you the back story, but the images are apparent enough. A woman, lamenting the still image on a wall, walks in a room distraught, and switches on a film projector. She immediately relaxes, and is caught in the scene. A second woman enters, and screams in panic. A man shows up, questions the second about the first's whereabouts, and then determined, switches on the projector. The missing woman is on the screen, in the film, addressing all the other "characters" as her friends. The man calls out to the screen, in a screen-within-a-screen shot, the hallmark of the modern film self-consciousness. Bizarrely, horrifically, the image responds, long enough to wave goodbye and disappear, as the reel runs out, and the image disappears. The man gets up to leave, and walking into the hallway, finds the missing woman's handkerchief, and we realize suddenly that the scene on the film was filmed in the hallway of the house in which they remain.

Wow. A lot to digest here.

Rather than come up with an overall "theory" of this clip, I just want to draw your attention to some elements. I've been working on a theory of criticism lately, of various narrative arts, that instead of trying to apply an overall "theorem" to the piece or genre or form, instead finds various machinations of how any such theory will work. I'll go into this later when I have my ideas more solidified, but for now, let's leave it at this: if you were to approach a complicated machine like a car, you might not try and explain "how the car works", but instead pick out some unique and emblematic parts of the car, and explain how those work. You might explain a wheel and axle, or a disc brake, or a 4-stroke combustion engine. These are all parts of how the car works, but do not attempt to explain it in totality, which would necessitate various economic theories of production, design theory, raw materials processing, social forces in transportation, and on and on. Nothing works independently of anything else. But to learn anything, we have to start taking things apart. And you start taking apart most things by unscrewing some bolts, and unplugging a few wires, looking carefully while you do so.

So, like, look at THESE parts, man:

- Memory: memory is an attempt to re-live time, to rethink it, to put yourself back into a "mindset", to remember "how you felt" then. It is an attempt by consciousness to replay itself.

- Photography: still pictures are good, moving pictures better. Moving pictures are still pictures, played 24 frames per second, pulled across a lens, with a shutter to silence the perception of the film's motion, and instead create the illusory perception of time passing. The image moves, but the screen stays still. This might as well be the logic circuit of the spark plug timing computer in this clip. I refer you to electromagnetic physics, i.e. Deleuze's Cinema 1 & 2, and countless other works still trying to puzzle out on paper how film actually "works".

- The "Screen", or the substance of film: a white sheet, where "everything happens". The blank slate of consciousness of film. Screen's within a screen tell us "this is not actually happening; you are watching it." Translation: this is not actually happening; you are remembering it. The difference between neurosis and psychosis is that a neurotic can still tell the difference between fantasy and reality. However, sometimes psychotics are actually better off in terms of mental stability and happiness. Sometimes.

- Film vs. Video: What is real, and what is staged? The advent of video means the advent of "reality". Snap shots of motion, hence what's "real". Video has a living pulse. It is ever-present, in a way that film can't be. You cannot see video data. It can be erased, and re-recorded. Now it is digital, and exists without physical form. Film requires a theater, a screen. Video lives in the TV. Video is what the TV breathes, eats, shits, and fucks. Video works with the TV, in the TV, against you.



- Double exposure: What happens when a film is wound around the reel? Does it breathe, in the way that videotape breathes on the shelf, desiring us to play them, to insert them like the cartridges they are? What happens to the woman in the clip? If the film is played back, will she appear again? Or is she a ghost, inhabiting the film, moving with an uncanny life apart from the captured individual frames? Or is she gone forever, the lingering moments on the end of the film her vanishing image, her shadow?

- Camera anxiety: I first discovered camera anxiety while watching The Man with the Movie Camera (clip below, another crazy earth-moving vehicle to be disassembled). When watching a shot of a car or a train, I simply observe the car or the train. I am an innocent voyeur. When I see the shot with the camera man filming the car or train, I am suddenly aware, "those people are being filmed!" When it goes back to the original shot, I cannot shake the feeling. I am sitting with the camera man, and we are looking at the car and train. Anyone can see us, looking at them. It gets worse: in the second shot, I have a delirious realization: "there must be a second camera somewhere. Where is the second camera? Where is the camera that is filming us filming?" When do we see characters in film and video as ourselves? Are we better off or worse for knowing the difference?



Now back to your regularly scheduled phenomena.

4/19/2010

More Tube Photos

Some more tube photos, since they seem to be popular. Comments refer to the picture below the text. Click on any for big.



30 second exposure, lit only by candlelight, below.



My Argus Autronic will double expose on the last frame on the roll. Leads to some great unexpected shots, because the camera will just keep clicking away until you realize it.



Lomography fish-eye camera here.



The Lomography camera also has an open-shutter function, but no tripod mount. Hence, the table in the bottom of the shot. And I had to count to thirty in my head.



Taken from an iPhone 3GS, with some sort of filter program. Shot by my friend Emily.



And, the tubes were still there in the morning!



Another iPhone 3GS with app filter.



iPhone 3GS. Me.

4/17/2010

Tube Installation

I hinted via Twitter that I was working on something, and here it is. I've been filling our main room and dining room with a lattice-work of cardboard tubes.



This is what happens when your partner leaves you alone in the house for three weeks. Those little ideas in the back of your head leave the back of your head, and you spend your evenings doing this.




M was, of course, thrilled to see it when she got home. We think alike.



The installation is made from 1" diameter, 24" long cardboard tubes, and stretches from floor to ceiling in a space of roughly 25' x 15' x 10'. It is free-standing, held together only by the strength of the tubes, and hot glue.




The tubes were rescued from a recycling dumpster. I used about 500. The hot glue was bought in a late night frenzy when I first had the idea, but was unable to find our own hot glue stash. Sometimes, inspiration won't wait.


I've made other tube sculptures by cutting the tubes to the right length, but for this I left all tubes at their 24" length, flattening the edges to make bevels where necessary.


This gave it a strange 3D geometrical shape, proportional, and yet warped. I remembered the studies where honeybees are given LSD, and then produce warped honeycomb. The tubes are geodesic domes under a similar treatment. Not LSD, just warped, home-industrial boredom. An excess of raw material leads to products of irrational design.


But at the same time, it isn't irrational. There are pathways of open space leading from the front door, to the couch, to the kitchen. From the couch to the stereo. This was the only space I was using in the room; the tubes occupy unused space.


In some ways, it's an externalization of things I was feeling. I was lonely, a little bored. I was feeling frustrated that many creative ideas I have are put off, due to insufficient resources or time. This idea then launched out of me, in an aggressive reaction against the sort of conservative thought that leads one to, say, not fill one's living room with tubes. A rejection of all the reasons why not.


M, upon arriving home, was inspired with ideas to augment and add to the lattice work. In this way, she is rejoining my space, and adding to it, as she always does. It's now our space again, full parts, empty parts, and all the angles in between.


The video will probably give you the best overall picture. My camera with a wide-angle lens is not working.

4/05/2010

MOMA acquires @

I've recently been a bit bored by the Internet, but this, this REEKS of awesome.

MoMA’s Department of Architecture and Design has acquired the @ symbol into its collection. It is a momentous, elating acquisition that makes us all proud. But what does it mean, both in conceptual and in practical terms?

[...]

While installations have for decades provided museums with interesting challenges involving acquisition, storage, reproducibility, authorship, maintenance, manufacture, context—even questions about the essence of a work of art in itself—MoMA curators have recently ventured further...

The acquisition of @ takes one more step. It relies on the assumption that physical possession of an object as a requirement for an acquisition is no longer necessary, and therefore it sets curators free to tag the world and acknowledge things that “cannot be had”—because they are too big (buildings, Boeing 747’s, satellites), or because they are in the air and belong to everybody and to no one, like the @—as art objects befitting MoMA’s collection. The same criteria of quality, relevance, and overall excellence shared by all objects in MoMA’s collection also apply to these entities.


The link above then goes on to describe the history of the "@".

Behold, the work of museums in the atemporal age. Acquiring what belongs to nobody, and what everybody already has. Because what is really important is what we all already have, but fail to acknowledge without the curation a museum can bestow.

4/03/2010

iDuality

The one great part about dualities is what the philosophers of the dialectic call "the aufhebung", which is a German word meaning something along the lines of "overcoming", or "overpassing", or "surmounting". As the story goes, in a dialectic, or conversation of duality, first there is the Thesis: the first argument. Then, there is the Anti-thesis: the counter-argument. Then, at long last, there is the Aufhebung, when both arguments of the duality are surmounted, and a fresh Thesis can develop. This, according to some, is the way history makes progress.

I tend to think that history is a long continuum of basically the same theses and anti-theses, and although the wise guys who live on the mountain top (or on the Internet) see where the aufhebung SHOULD be, they instead watch the rest of the world cycle back through the same problems again, and cackle to themselves, and stuff another handful of funny herbs in their mouths, and wave their genitalia at society.

Speaking of which, the espresso I'm drinking right now is excellent.

My partner, who is an artist/folklorist, is great to talk to, because she will often take the opposite of an argument that I will, but she will be coming from the perspective of an artist/folklorist, as opposed to my perspective as a writer/philosopher/atemporality-early-adopter, which are very similar, and yet still very different. For example, try and imagine orienting the Creative Commons anarchistic open-semiotic common property position with the arguments protecting the rights of native legends told by people who have never seen a tape recorder in their life. These sorts of theses and anti-theses, cast in examples which are not the typical Internet Corporate Rights-Holder vs. Internet Mass User scenario, lead to some very interesting aufhebung directions, because they allow for a re-stratification of the old arguments, opening up new lines of escape from the patterns of thought that led us into the duality trap to begin with. You really see more about the basis of the ideas of the argument, and the bottom layers of the material the argument is MADE from. You can break past what you might think "property" and "creator" mean, and start getting at what "intellectual material" and "speech" might mean. For me at least, it breaks me out of the humanist and liberalist boxes, filled with mental traps about what "rights" and "property" and "freedom" supposedly entail, and the moral presumptions of each.

When I, against my better judgment, clicked on the links to Cory Doctorow's recent BoingBoing post about his righteous hatred for the iPad, and then the folks at Gizmodo's equally righteous and offended rejection of his hatred of the precious gadget, my duality warning light flipped on, and the Aufhebung klaxon began sounding. (Both are mounted on the wall right above my computer.) I knew they would start sounding. Of course. But it's like when you pull a fire alarm, and then you jump anyway when it starts ringing. You just can't help it.

I love Cory. How can you not? I don't need to extol his many virtues. I've never even seen the guy in person, and I can tell how he's just a generally nice, positive, devotedly idealistic guy. Or at least I am assuming he is. But man, does it ever kill me what a liberal he can be.

And I don't mean that in the knee-jerk, hey-I'm-not-actually-conservative-but-let's-be-reasonable-here category that Joel Johnson fell into with the Gizmodo response. Because that's what people generally do when they hear someone take the liberal-or-death tack.

I mean it from the same point of reference I meant it all the years I've clashed with liberal protesters over their vegan-or-death, more-organic-than-thou, anti-sweatshop or nakedness, anti-globalization positions. And these are issues on which we AGREE. I think Cory is generally right. I think anti-globalization protesters are generally right. But they are just so locked in to their own argument, they are not willing to study the actual material of their argument any more. It's not that I care that Cory is supposedly "alienating his audience", or anything like that. I could care fuck all about alienating anyone. I'd just like someone to actually think for a moment, and then form an opinion from that, instead of just throwing the iPad (or anything else), on the Liberal-ization Meter, and seeing if the needle points to positive or negative.

So the other day, my partner and I were discussing Marina Abramovic's recent performance at MoMA (which I believe is still going on). I did not see it, but my partner did, and she expressed annoyance at some of the actions of the other people who were there to see it. If you don't know Marina Abramovic's work (first, go read about it), it is performance art heavily weighted towards creating reaction among the viewers. As we talked about it, our discussion came around to her famous/infamous Rhythm 0, 1974. Long story short, she put a number of items on a table, including weapons, and suggested that the audience do whatever they liked to her with the items. Climactically, one audience member did take the loaded gun from the table and pointed it at her head. Another audience member pulled the gun away from the person holding it. We posed the question of whether or not our reaction, had we been there, would have been to pull the gun away from the person pointing it at Marina Abramovic, because part of her art is the reaction of the audience members. We then discussed the ethics of intervening with other actions of audience members, at different levels of inappropriateness or annoyance, and then the set-up of the art piece, including museum staff charged with allowing people to do or not do certain things.

Regardless of what you think about Marina Abramovic or performance art in general, I think this distills a very interesting aspect of human action involving other humans. Human action, like our dialectical conversations, is basically a cycle of action and re-action. Even to respond to a person's action by doing nothing is in itself an action, and an action that is engendered by the original action, and will in turn engender more action afterwards. This action could change, and be a progression, an aufhebung if you will, or it could repeat. Any moment is subject to this micro-analysis, not just those labeled as performance art, with Performer and Audience as concrete, defined positions.

Back to the damn iPad. I refuse to believe that Apple owes anybody anything. Nor do I think there is anything that a Consumer should or should not do. There is no moral ethic to action in a marketplace. There is only what you are trying to do. Apple is clearly acting with the intention of furthering its business, in a particular model of what it thinks will succeed. Consumers, on the other hand, are not trying to do anything, or rather, are trying to do very different things depending on who they are, and making decisions that are not nearly as unified as a corporation with a very strong leader at its head. Not all of these decisions are rational, and very few of them are aligned to any sort of ethical or moral compass. When you put a moral compass onto this situation, and then accordingly try and walk in a straight line regardless of the terrain, you end up looking stupid. Cory's arguments sound sad, like he is simply pissed that this thing is going to sell because he disagrees with it. If it is really so useless, then no one will buy it. Or, the people who will buy it will end up in computer-user holes, and will stagnate. Only a liberal, staring at the compass in his/her hand, would believe that someone "should" do anything simply because of the position of an action according to a compass.

I look at Apple's products as performance art. If they want to post armed DRM guards in the room, it's their performance, so they can do what they like. It may make the art worse. Or, it may cause audience members to attack the guards, in order to view the art in the way they want to. But Apple will do what it wants to, and people will react. My opinion is that most of this spectacle is for spectacle's sake, and to my mind, this is pretty shitty art. But I've always preferred art installations in homes to diamond-encrusted skulls and formaldehyde sharks.

I hate the duality building between Consumer and Creator. This is a stupid duality, isolating each from the other. It takes away the fun of starting your own band, and the fun of jumping on stage during a punk show. I don't want to have to limit my apps because I have an iPhone. And I don't want to have to learn Python so I can debug my Linux wireless drivers. Neither of these sides wins. And neither of the sides attempting to act or re-act with this duality wins. If you are a creator, you can only create for one or the other, and if you are a consumer, you can only consume one or the other.

I don't think a middle ground is the solution either--frankly, I'm unimpressed by the so-called "creator" tools anyone with an iPad will be able to use. Some of the best consumer programs I use on my phone are free, buggy, and generally "un-iPhone-smooth". This is pretty analogous to my experience as a computer user since the time I was six. It never works as easily or as well as you want it to, and the thing that would work better is too expensive to get, so you find a way to make it work all the same. I know a lot about computers, but can't program anything more complicated than Excel or a TI-83.

This is not a middle ground, nor is it a solution. It is a continuance of the norm for me, as I've experienced it. Many other people will no doubt continue to find their own niches, whether it involves compiling Linux install files, or emailing their son/granddaughter for computer help. I doubt Apple's products will ever be as idealistic as they think they are, nor will they destroy any Maker/Computer club.

What I wish, however, is that people would stop acting according to the ideals of what they think "computers should be" and start designing them according to how they will be used. This means they would have to ditch the "target market" completely, because as it turns out, no one in the target market will use it the same way. What if computers were designed for individuals? What if the actions of a computer were defined with a specific user's reactions in mind? I am tired of hearing that UIs are "slick" or "clean" or "complicated" or "easy" or "hard". UI stands for "user interface". Shouldn't the only thing a UI does be to interface with the user? Instead, they appear to be designed like performance art, craftily imposing certain reactions upon the audience, and "failing" when they don't engender the precise reaction they are supposed to. There is no moral implication of iTunes--but at one point or another, it has either satisfied or failed to satisfy every person who has used it. Each of the reasons should be studied, and incorporated into the next release of iTunes. It may be impossible to satisfy everyone--but then you do not have a true UI. You have a piece of art that will either be popular, or will be unnoticed.

We are now at the point where people are so fascistically tied to a UI, they are neurotically afraid to leave it! The differences between UIs are so great, and we base our re-actions so heavily on a particular UI, that we can no longer simply "use computers", but only use particular UIs! The UI is the computer for us, in a bad way. We have trained ourselves to re-act to the location of menus, rather than to what the actions we click are doing. If we actually saw the Mona Lisa on the table in front of us, rather than behind armed guards and bullet-proof glass, would we know what it was? If we ran into Marina Abramovic on the street doing her food shopping, would we smile as we would to any other human being, or would we be overtaken by the urge to say or do something meaningful in re-action to her person?

Luckily, for me anyway, computers are almost to a point, and my knowledge of computers is almost to a point, where I can build my perfect UI, and I won't have to hope that someone builds it for me and sells it to me. A computer is, in this age, an Interface to digital data. My perfect UI will actually be a network of computers and devices, allowing me to re-act to the world of digital data in the way I want to. It might include an Apple product or two, and it will also most definitely have a Linux product in it. This UI will be the most perfect freedom, regardless of whether or not I can share music online or lend ebooks to my friends, because I will be able to do everything I want to do when I want to do it. I'm guessing my perfect UI will never be complete, because there are new things I want to Interface all the time.

iPad haters, and Gadget lovers, can't we come together, and just build our interface? If Cory spent all the time he spends preaching against DRM on offering free Linux tutorials online, I'd be a lot more free of a computer user. If Gizmodo spent all the time they spend fondling things I can't afford on connecting the dots between these consumerist nodes and actual usage practices, I'd love my gadgets a lot more.

New pearl of wisdom: All the world is art, and 95% of art is boring.