1/31/2009

Hmm... Yes, Well...

So I've been thinking about the pretty angry and rant-filled post I made earlier today.

I thought about deleting it, because really there isn't a point in insulting some random designer anonymously on the Internet. There are plenty of worthless arguments made every day, and a lot of them will be posted online. So why bother?

But I'm going to leave it up. In the spirit in which it was posted: there is plenty wrong with the world, there are not enough people trying to change it, and there are too many people setting up roadblocks in the way of those who are trying, for me not to call "Bullshit!" when someone perpetrates a big pile of it. Especially when they call it theory, and it is anything but.

If it had been presented as "marketing", I wouldn't care. But because it was presented as design, and what's more, information design, it rubbed me raw. These are disciplines in which there are plenty of people doing good work, and good theory, both important to the evolution of life on this planet. That some jerk can waltz in with misinformation and crap theory and start fucking it up is a travesty. There is way too much "don't hate the player, hate the game" in theory. The people who try and tell you that anecdotes replace good, solid critical thinking need to be run out of town on a rail.

However, I do want to give full disclosure, and say precisely why I thought this bit of idiocy required me to be so insulting and sardonic.

One: I left my particular theory discipline for precisely this reason. There were plenty of good thinkers there, but most cared more about egos and careers than about the evolution of the theory. So, the point is a little close to my emotions. Sue me.

Two: I have a particular disgust for "consultants", i.e. those who have some abstract knowledge about "business" or "marketing", and step into a production environment, not knowing anything about actual production (that is, MAKING THINGS), and try to control the situation. I have heard just about every one of those 20 damn receivables in terms of "consulting", and they were all bullshit. If you have no ability or skill in MAKING THINGS, why don't you leave the strategy to people who do? There are plenty of smart people in production who can manage, strategize, design, and implement. I think the "proverb" is: lead, follow, or get the hell out of the way. Again, an emotional topic.

Three: Diplomacy works when you are negotiating. Reasoned argument works when you are trying to convert somebody to a point of view. Ridicule and derision work when you are making an example of some thing or someone. I didn't necessarily want to trash that guy's work. But I do want anyone who would listen to me in the slightest to think pretty damn hard about ever proceeding down a similar avenue. Politeness isn't necessarily a receivable for me. Progress and thought are.

So that's it. Reader beware.


ps. I simply found the article via Mssr. Sterling. I'm not sure under what auspices he posted it--there wasn't too much commentary. But even negative examples are good ones, so I just wanted to give credit for directing me towards it, for what it's worth.

Eichmann Design

So, what's the deal with Bruce Sterling anyway?

He certainly seems to have his hyperlinks in a lot of Internet pies.

I suppose if your credit in many projects is labeled as "____-visionary", then that sort of thing just spawns new links to things.

Also, he would appear to be just a good old-fashioned interesting guy (sort of a Man of Letters for the media age).

The point: here I am, again, posting about something he posted. (Though, let's just say it straight out--his blog is by far one of the best sources for "original" content posts. That is, I find many things from him first, and then see it re-posted other places. Sort of like the re-posting I'm committing right now.) A lot of hyperlinks--a lot of pies.

Anyway, the something is this thing - http://semanticstudios.com/publications/semantics/000228.php

I'm not going to give it a cleverly-named hyperlink. Frankly, I would rather not post about it at all. I even instituted my new "24-hour" policy, saving the link and waiting 24 hours (about a week in Internet time) to make sure I really had something worth saying before I posted.

And that asshole's picture on the linked page is still pissing me off.

Alright: so I don't know if he is an asshole. He could be a great guy. But because the content of the page is driving me crazy, I'm going to pretend he is a jerk (and not say his name). Because that's what we do with "Little Eichmanns", right?

Whoa! Easy there, tiger! Why don't you just stick to insulting the guy's personality?

Because this is micro-fascism, pure and simple.

I have nothing against product design, or information architecture. I am a huge fan of both as disciplines, and am constantly amazed by the people who do this sort of work.

It is just this simple fact: the concepts that he proposes in outlining the conjunction of product design and information architecture, what he calls "experience design", go against everything I know and believe about theory and philosophy.

I'm not saying he's incorrect. I'm not saying he's inaccurate. He may describe the current field of design and information architecture better than anyone ever has. But what he is describing is against just about everything I know and believe.

Philosophy and modern critical theory, though often given credit for bringing the relativism and anti-moralism of the post-modern age to the mainstream, are not laissez-faire disciplines. It is precisely the prescriptive nature of theory that rejects such dogmatic world-views as idealism and moralism. Of course, after the philosopher finishes his/her tirade about the errors of the world, if you ask him/her "so what do you actually believe?" you will probably get a good deal of specious circumambulation and general backpedalling. This does not mean s/he does not have an answer to the question; it simply means the answer is impossible without a lot of well-practiced specious circumambulation and general backpedalling.

The goal of thought is to understand and organize our consciousness of the material world, and by doing so, seek to improve it. Philosophers and theorists are working at it, and anyone can take a look at the world around them and see the need is dire. However, this doesn't make the work of thought any less difficult and time-consuming. Luckily, we've developed some good tools over the years. Philosophers have a pretty decent "means of production" at their disposal, and some of the best "thought designers" in history have contributed to our material work on thought and theory.

But this fellow ignores all of that. Rather than implement a single feature of responsible "thought design" in his notions, the twenty "deliverables" he mentions are antithetical to any sort of critical thinking, most simply because they reject the actual "critical" element of thought and replace it with jargon, slogans, icons, and hype. This is the philosophy of capitalism at its best/worst--rather than use any means of thought, he has done an end-run around theory and fashioned for himself twenty easily identifiable products, i.e. deliverables. He hasn't built these ideas in the time-tested practice of thought--he's simply gone out to the streets and picked some ideas that sounded like they would be popular. What part of design is designing popularity? Isn't the goal to design new things and improve upon old ones, thereby improving user experience? What in design theory says it is a good idea to take a product and repackage it into another product? The art of USB dongles? It's surplus-value at its least productive. These "deliverables" are the branding of thought into a product. Thought (TM): in easily consumable packages, ready to be bought and thrown away. And naturally, the folks in his comments section eat it up.

It would be one thing if he had created these "deliverables" from nothing. But he committed the worst fallacy of ethical philosophy in designing these products, making them actually harmful, rather than benignly un-caloric. He starts with assumptions, and works backwards. (Milk is white/this milk is not white enough/let's add melamine.) It's worse than a fallacy actually, because after his flawed premise that "anything the user will buy is good design", he doesn't even work backward with logic. (Milk is white/this milk is not white enough/let's add lead.) These concepts don't rely on flawed logic so much as on no logic at all. They are, in a way, deductive paralogisms: he takes particular examples, which may or may not be true, and then falsely derives inferences from them. Not so much an error as a complete lack of method entirely. They're not sound; they're not logical; all they seem to be is his particular view of the world--a world in which thought only exists in its ability to be sold to a consumer.

This could very well be a new stage of capitalism, rearing its head. Surplus-value, classically an abstraction of material circumstances used to siphon material production, has now leeched its way into the super-super-structure: our thought itself. It is wrecking the material logic of thought! It's siphoning off intellectual production! It is a contradiction of the material fundamentals of reason itself! And unlike some other materialists, I have no confidence that such contradictions will subsume themselves.

But enough of this abstraction. Let's look at the "deliverables" themselves, and try to figure out how he thought this might be a good way to think.

1. Stories. "A good story about a user's experience can help people to see the problem (or opportunity), motivate people to take action, and stick in people's memories long after we're gone." And a good joke can make you hungry. What the hell is this supposed to mean? Narratives are important to people, sure. But narratives sell wars! Narratives put people in camps! How about rather than tell people a story about how good a product is, we ask them to tell us a story about their ideal product? Then we can analyze the story, and see what their stories tell us about their needs and goals. Oh, but that might mean I have to listen, rather than just speak.

2. Proverbs. "High-concept pitches, generative analogies, and experience strategies invoke existing schemas to put the world in a wardrobe." Oh yes, for example: Arbeit Macht Frei. That was a brilliant proverb. Just take a look at his own sentence: it can mean whatever the hell you want. "We're all in this together", or "we need more lebensraum". What was it Bush said? "Fool me once..." yeah, exactly. Proverbs--the unequivocal enemies of logical thought.

3. Personas. "Portraits and profiles of user types (and their goals and behaviors) remind us all that "you are not the user" and serve as an invaluable compass for design and development." I think we normally call these stereotypes. Remember, the customer doesn't think like you--he is stupid, likes small form factors, and eats babies.

4. Scenarios. "Positioning personas in natural contexts gets us thinking about how a system fits the lives of real people." Because this obviously makes more sense than asking real people about their lives. I do most of my sociology through reality television now. Have you heard? Most of them are real people!

5. Content Inventories. "Reviewing and describing documents and objects is a prerequisite to effective structure and organization. The artifact (often a spreadsheet) is a sign of due diligence." Ask anybody in production how useful they find inventories to be. They waste time to collect, waste time to correct, and even if they're right, they tell you what is sitting right in front of you. And due diligence? I would think if this Year of Our Financial Apocalypse taught us anything, it is that due diligence doesn't mean shit if you don't know how to count.

6. Analytics. "We learn by wallowing in interaction, search, and navigation data. And, we teach by uncovering and charting the most pivotal landmarks, portals, paths, and patterns." Isn't the proverb: 'we learn by doing'? Remember: economists don't make the economy, they just try to understand it six months later. Now, analysis would be something: critical investigation of method and process. If you don't use the right metric, data is nothing. (See above comment about due diligence).

7. User Surveys. "Asking the same questions of many users across multiple audiences can reveal existing gaps and common needs, and show how they map to customer satisfaction." This is a deliverable? Really? A Survey? Well, it worked great for No Child Left Behind. Bring on the standardization!

8. Concept Maps. "In the territory of concepts, a good map can help us see where we are and decide what to do by establishing landmarks, clarifying relationships, and identifying true north." This makes me want to vomit. While it's not overt fascism, the marketing-consultant schmaltz of it is still killing me inside. How about instead, "what is a good map?" That would be an actual, interesting concept. This guy's amazing theoretical mind is telling me what a map is for. Is he getting paid for this?

9. System Maps. "A visual representation of objects and relationships within a system can aid understanding and finding for both stakeholders and users. Shift gears from "as-is" to "to-be" and you have a blueprint for structural redesign." Wait a minute, he just copied #8! But seriously: one of the biggest swindles of the last Bush administration was the re-designed concept maps of DHS after Katrina. Look it up. They gutted the organization with a rusty, bureaucratic sword. You couldn't even find an email address, a PO box, or a web page by the time they were done with their system maps. This is literally a recipe for disaster.

10. Process Flows. "How do users move through a system? How can we improve these flows? A symbolic depiction can enlighten desire lines and show the benefits of (less) chosen paths." Three deliverables in a row have told me to draw a picture. Next he'll be telling me to make a Powerpoint. First Czechoslovakia, then Poland!

11. Wireframes. "Sketches of pages and screens can focus us on structure, organization, navigation, and interaction before investing time and attention in color, typography, and image."
Is this a sketch, a flow, a diagram, or a map? What's the difference? Those would actually be thought-provoking questions.

12. Storyboards. "A series of sketches with narrative displayed in sequence can tell a story and paint a picture by showing interaction between users and systems in context over time." See what he did here? He combined "stories" with "pictures"! Next stop, rhyming lyrics, and cats in hats!

13. Concept Designs. "Interface designs and composite art invoke an emotional response and capture people's attention by presenting a high-fidelity image of how the product could look." He seems to have made good use of this one with his own "receivables". Too bad most concepts never make it to production. I wonder why?

14. Prototypes. "From paper prototypes to pre-alpha software and hardware, working models drive rapid iteration and emotional engagement by showing how a product will look and feel."
Doesn't this refute the previous one? What good is a concept if you can't get it to work? Also: it seems that he stopped short of a prototype for his own receivables. I can tell from the concept that it wouldn't work.

15. Narrative Reports. "Writing is a great tool for thinking and organizing. And, it's hard to beat a written report for presenting detailed results and analysis or formal recommendations. Reports can serve as a container for most other deliverables." Well said. Too bad he didn't follow his own advice. But wait: he never specified that the writing be good, or make sense! All style, no substance. No wonder they call it "Intelligent Design"!

16. Presentations. "As the lingua franca of business, slideshows (and videos) can be great for telling a story or painting a picture. They can also be dead boring, unless you present in person, hit the highlights, and beware the bullets. Presentations can serve as a container for most other deliverables." Thank goodness! I thought we'd miss Powerpoint! Powerpoint: your most boring ideas, presented in a font and color of your choice. Now, with Clip Art!

17. Plans. "Project plans, roadmaps, and schedules guide design and development activity by clarifying roles and responsibilities." Could have used one of these for the Iraq Occupation. Note to self: come up with one for the Economic Bailout. Point: even bad plans are plans. Furthermore: even terribly destructive plans are plans.

18. Specifications. "An explicit set of requirements describing the behavior or function of a system is often a necessary element in the transition from design to development." Fuck! Enough of these sorts of "deliverables", and I'll drive my car off a bridge! Sure, this one is meaningless jargon, but it's also exactly backwards from how design should work! Shouldn't you be developing function FIRST? Great product! Now: what should we make it do? WHO THINKS THIS WAY?

19. Style Guides. "A manual that defines a set of standards for identity, design, and writing can promote clarity and consistency." Yeah, it was the consistent font and margins that really pulled this set of deliverables together. Maybe what it needs next is a good logo, or a symbol. Maybe on a flag? Then we can have people salute that flag! If they don't we'll shoot them! Clear, and consistent!

20. Design Patterns. "A pattern library that shows repeatable solutions to common problems can describe best practices, encourage sharing and reuse, and promote consistency." Most of these ideas, in the most generous of terms, seem loosely organized by the principle of designing a product for a particular situation and audience. Now he's telling us to do the reverse. He told us to use stereotypes, then he told us to do surveys. He told us to sell the idea, then he told us to listen to the customer. This is the idiotic championing of contradictory schemas, which you may remember from such wonderful methodologies as State Communism and Conservatism. Give them a couple of ideas, and don't worry if they contradict each other. Just yell them over and over, and the whole crowd will yell with you. Repetition is the key--substance is meaningless.



Okay, so perhaps I was a little harsh: not every one of these ideas is fascist. But none of them are smart, well-described, or planned in any sort of theory that could be termed "progressive, useful, rational thought". And, well, some of them are sort of fascist--particularly 1-4, and maybe 6 and 19. This is the point of the "little-Eichmann": you don't have to wave the flag, you just enable the people who do. If design, especially information design, doesn't think that enabling harmful thought processes is its problem, it has another thing coming. Even if your goal is to make money, which I totally understand, debilitating the customers, misunderstanding their needs, and designing useless crap as a result is hardly a sustainable way of doing it. Think about any time-tested champion of design. Now, think about how many of them focused on idiotic and illogical "receivables" like these. Maybe they used parts of these concepts, somewhere, somehow (with such broad, nebulously defined ideas it would be tough to avoid resembling some element of them), but the core of their design principles involved actual creative thought. Don't just take it from me, the materialist philosopher. Ask an actual designer.

There is a reason that philosophy is still around. It's in the business of designing ideas--and that is as self-referential a design process as there ever was. Philosophy knows that good thought is not going to please the customer all the time. But eventually, good results trump any number of stories, presentations, and stylesheets.

1/28/2009

I've been Sterlingged

As a study in the weirdness of the internet, and the small (but steady!) readership of this blog, here is my analytics chart for last week, when Bruce Sterling re-posted a picture from my blog post about the SF Museum and Hall of Fame in Seattle.



370-some hits in one day was a pretty big bump for lil' old Interdome.

Also, he and Rudy Rucker have a new short story in Asimov's about the end of the world visualized through the perspectives of two hardcore bloggers in the not-so-distant-future. The story is great; those guys are awesome. Any blogger should get a kick out of it, even the not-SF-and-techno-jargon inclined.


Also, if you examine the chart above, you will notice that most of my hits from search engines are drawn to my pictures of "Greek animals". This has been, consistently, the biggest single draw to the site since I published that post back in 10/07. Again, the Internet is a weird thing.

Game Theory

So this post has been making the rounds: in it, Steven Johnson, guest-blogger on BoingBoing, mounts a barbed prosecution of Candy Land and other childhood games of chance rather than skill.

Which, of course, he is right to do. And the commenters who put him in his place for basing his critique on his 2-to-7 year-olds' experience of a game meant for the pre-reading age, for whom learning how to play board games and take turns is as much an important lesson as decision-making, are equally right.

I think the answer we can walk away with, happy that we have successfully mediated this potentially violent board-game schism, is that we can all learn something from anything. Hosannah.

What I want to write about here is not "healing the wounds", "bridging the gap" or "Peanut Brittle Houses". I want to write about blowing the crap out of your siblings with the biggest pieces of military hardware known to man.

Battleship! (the MLA has decreed that it should always be written with the exclamation point) is another game that Johnson takes to task. He realizes just how boring it is when explaining the rules to his kids:

"I spend thirty minutes setting up the game, explaining the dual grids and how one represents their fleet, and the other represents their opponents'. I have to explain the pegs, and the x/y coordinates of the grid, and the placement of the ships themselves. And then when we're finally ready to go, I explain how the actual game is played. 'So pick a random point on the grid,' I explain, 'and see if he's got a ship there.'

"'Nothing? Okay, now you pick a random point on the grid.'

"'Nothing? Okay, let's do it again…'

"I hadn't thought about this until I actually played the game again last week, but there is absolutely nothing about the initial exploratory sequence of Battleship that requires anything resembling a genuine decision. It is a roulette wheel. A random number generator could easily stay competitive for the first half. But even when some red pegs appear on the board, the decision tree is still a joke: 'Now select a co-ordinate that's next to the red peg.' That's pretty much it. Yes, at the very end, you might adjust your picks based on your knowledge of which ships you've sunk. But for the most part, it's about as mentally challenging as playing Bingo."

And again, he's right. The game itself is nothing more than a point of reference, so that later on in life when you learn about scatter plots, you can say to yourself, 'oh yeah, the Battleship! graphs.'

Which is precisely my point, though idiotically described: the point of games is much more than the rules of game play itself—the game is everything that is not the rules.

This could be taken to mean, 'the point of games is not the game--it's how much fun you're having while you play the game.' Which is true, but I'm talking about other specifics. For example, as others commented on the original post: learning about Cartesian coordinate systems. It is a side effect of the rules of the game. Nobody plays a game to actually complete the goal—you haven't actually sunk any battleships, settled Catan, or raced Light Cycles to certain death (or have you?)—you've played a game themed around the imaginary act of doing these things. In other words, the point of the game is not the game, it's the "playing". If it is fun, challenging, or a nice pastime, then you play it. If it's not, you go do something else.

Which brings me to Battleship! My brother and I played many different games when we were young. Battleship! was always a good one, and we still play occasionally—I believe the last time was a few years ago. Our enjoyment of the game was on many levels. For one, we were (and in some ways still are) pre-pubescent boys, who played war in various forms, and liked simulated combat and military technology alike. A computer game that got a lot of play when we were younger (and it was still current) was Task Force 1942, a fleet combat simulator, which had its biggest points of real "action" in the gun view, where you could actually hit the space bar to bang away at pixelated ships on the horizon. Maybe we just liked boring games. We also liked the movie Sink the Bismarck!, which is similarly a boring film, though basically historically accurate. We actually used to act out the gunnery officers screaming into their phones, "Foy-are!" (the German officer) and "Shoot!" (the British) while we played Battleship! This must have driven my parents nuts, almost as much as it drives the viewer of the movie nuts, forced to watch the exact same clips of the gunnery officers about three thousand times each in the course of the film. I was always the German—I liked the enunciation better. My brother always put a premium on being the good guys as well—I was, at various times, German, Russian, alien, and Confederate. It just didn't really matter to me. I was a fatalist from a young age.

But here's why I really liked Battleship!: I could win every time. Okay, not every time, but much more often than a Game Theory optimization of strategies would indicate. And this is no slight on my brother's analytic abilities. He was always more math-oriented than I, quicker at sums, and good at grasping statistics and logic. I was a bit better at visual geometry and calculus (though he may disagree), but he was quickly able to see the strategy in scattering shots across the grid in the checkered pattern, optimizing the ship-sized area covered with the fewest number of shots, even at a young age. After he caught up with me on the math, we were back to 50-50, at quits.
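(For the curious: here is a minimal sketch, in Python rather than on graph paper, of what that checkered-pattern optimization amounts to. The board size and function name are my own illustration, not anything from the game box.)

```python
# Probing only cells where (row + col) is even guarantees that every ship
# of length two or more overlaps at least one probed cell, since any two
# adjacent squares have opposite parities. That halves the search space.

BOARD_SIZE = 10  # the standard Battleship grid

def parity_search_pattern(parity=0):
    """Return the cells to probe: every square matching the given parity."""
    return [(row, col)
            for row in range(BOARD_SIZE)
            for col in range(BOARD_SIZE)
            if (row + col) % 2 == parity]

pattern = parity_search_pattern()
print(len(pattern), "shots cover the whole board for any ship of length >= 2")
# -> 50 shots, versus 100 for an exhaustive square-by-square search
```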

However, I gained the advantage when I realized the flaw in Game Theory—there is a way to beat the statistics, and it works every time. Cheat. And I don't mean peeking at his board; I mean cheat on what the boundary of the "game" is in one's strategy. Game Theory is based upon a clearly-defined game, in which there are a set number of moves. The way to beat the theory is to find the one move that hasn't been accounted for. Game Theory was invented by scientists with very little imagination (outside of their field, that is), and they always miss something. Want to know how to overcome a Checkmate? Throw the board on the floor. Your opponent can't win the game if you never concede to lose. Of course, this means they will probably never play with you again, but you'll never lose to them again either.


This "cheating" widens the scope of what is important to the game beyond your opponent's view. Nowhere in the rules to Battleship! does it mention strategy. So what is the strategy? What are its boundaries? What are good counter-strategies? If I was thinking about this, and my brother was not, it could be to my advantage. This is why cheating is cheating—because you have extended the scope of the game beyond a defined boundary of what the game is supposed to be. I've played games of Monopoly with friends in which cheating is not a vague possibility, but assumed. The point of the game is now to cheat biggest, and best, without getting caught. A game can have any boundaries, or no boundaries, depending on who is playing what, when, and where.

This is why the Prisoner's Dilemma is bullshit. The boundaries are completely arbitrary and illogical. Any real criminal already has an unspoken death-pact in place to prevent such a dilemma from ever occurring. If one of them confesses, then his/her life is forfeit, and s/he can expect to be murdered before testifying, or soon thereafter. Why is that not included in the "dilemma"? And even under the outcome of confession (with a threat of murder or not), the real game is already over—the game of criminality. The game of criminality is that you have chosen to break the societal rules that incorporate such things as loyalty and defection. Therefore, if you attempt to take on a partner, you are opening yourself up to the possibility of betrayal. This is a social game played before the crime is even committed, and the prison cell is only the last move of an even bigger game. The Dilemma is meaningless, because the moves were decided in the context of a game much larger than extended or decreased sentences.
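(To make that concrete: a toy sketch, with invented sentence lengths and an invented 100-year stand-in for the death-pact, showing how widening the game's boundaries flips the textbook answer.)

```python
# Classic Prisoner's Dilemma payoffs: years in prison for my move,
# given my partner's move. Numbers are the usual textbook ones.
SENTENCE = {
    ("silent", "silent"):   1,
    ("silent", "confess"):  10,
    ("confess", "silent"):  0,
    ("confess", "confess"): 5,
}

DEATH_PACT_COST = 100  # stand-in cost for "your life is forfeit"

def cost(me, partner, death_pact=False):
    years = SENTENCE[(me, partner)]
    if death_pact and me == "confess":
        years += DEATH_PACT_COST  # the move the textbook leaves off the board
    return years

for death_pact in (False, True):
    # the best response to each possible partner move
    best = {partner: min(("silent", "confess"),
                         key=lambda move: cost(move, partner, death_pact))
            for partner in ("silent", "confess")}
    label = "with the death-pact" if death_pact else "textbook dilemma"
    print(label, "->", best)

# textbook dilemma -> confessing dominates, as advertised
# with the death-pact -> staying silent dominates, and the "dilemma" dissolves
```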

The same thing goes for Battleship!, even though I had no knowledge of Game Theory when I first discovered the fact. A random shot pattern makes the most sense if the placement of the ships is random--but it never is! The ships are placed by a person, not a computer or statistics theorist. Knowing your opponent is the battle; the ships are only the last move. Because of this I could confidently win at Battleship!, much as I confidently win at Rock/Paper/Scissor with people I know. I can see in my wife's face if she is going to go to Rock. The confidence has "solid fist" written all over it. After that, Paper is next in line. Scissor is the hardest gesture to make, so you can assume that it won't be chosen on the first round, unless you know a person really likes Scissor. Of course, if there is a tie the first round, they will probably choose Scissor next, to try and avoid a tie the next round. They know, sub-consciously, that it is the least picked. But why am I giving away all my secrets? Back to Battleship!

I knew my brother liked his fleet to sail across his board in an orderly, real-life pattern. But, he would not be so foolish as to place them all in the same direction. That just screams “predictable”, whether I could predict that or not. So, at least one ship, probably two, would be facing orthogonally to the other ships, looking something like my spreadsheet diagram, to the right.

Once I hit a ship, I would have a pretty good idea where to look for the others by assuming such a distribution. If I guessed randomly, I would have only a random chance. This was only the beginning of my inclusion of "other rules" in my game. Another good rule: if he is going to spread his ships in a pattern, then it is likely that one ship will be against the side of the board. This isn't a good place to start shooting, because it is often the smallest ship that gets stuck there, being placed last. But if you are missing one ship and haven't hit one against the side, that's a prime place to look.

Looking at his face was another major part of my play. On any shot, miss or hit, his facial expression would indicate whether I was a close call or far off. His poker face got better as we got older, but there were ways to bring it back out. Taking "irrational" shots would help. Say I had three hits on a ship, but no sink yet. An ordinary player would close in and sink it. But I would start shooting along my random pattern again. The relief that this would bring would put his guard down; if he showed utter relief then my random shot had gone wide, but if he stayed tense then I had another ship on the "scope". If, at this point, I managed to score a hit, and have two ships burning at the same time, the pressure would be too much, and I could read him like a map.

Then there were the counter-strategies. I remember the first time I taught him the trick of grouping ships.

The look on his face as he banged away at all that open ocean while his flaming hulks vanished beneath the waves was victory itself. Of course, he eventually hit my ships along the bottom, and was even more distraught when I refused to admit that my aircraft carrier was sunk after he had hit five ship squares in a row. He never forgot that lesson. Afterwards, I could expect that two ships would be sailing in classic "Pearl Harbor Anti-Sabotage" formation (that is, close together to minimize exposure). Of course, I then re-taught him Pearl Harbor's lesson, when my near misses could easily catch a second ship unaware. He wouldn't over-commit with that strategy—but then again, neither could I.

Another way I would try and confuse him and reduce "open ocean" exposure to random shot patterns is by sailing end to end.

This leaves lots of open space for him to shoot around, and even if he stumbles across half my fleet and figures out my nefarious strategy, the other half is still out there. And again, there is the psychological effect of having him make seven hits in a row without a sunken ship.

Once these strategies became known, I could expect them to be used back at me, and expect them I did. By looking at his face while he placed his ships, I could judge what sort of strategy he might be using. If he had a crafty look, he was probably trying something unconventional, like a close grouping. If he placed and re-placed, then it was probably a wider format, and he was making sure they were all spaced evenly.

Even after we, the two finest masters of naval warfare on my grandparents' tile floor, had played through these different "mini-games", we would eventually grow bored with our tactics, and seek to improve the game in some way. This, I firmly believe, is the cornerstone of any creative child's board game career. It doesn't matter how formulaic, random, or "zero-sum" a game may be if the players can re-invent the game themselves. We developed a system of moving the ships. I don't remember exactly how it worked, but I think it was as simple as this: you can move one ship one space per turn, or "maneuver", in which you can turn the ship 90 degrees using the stern as an axis. This makes the game much more than random—now the destroyer is at a distinct advantage, because guided correctly, its small size could possibly evade being sunk even after taking a hit. It also made each ship a dynamic object with a "bow" and "stern", because this would deeply affect its pattern of travel. After hitting a ship, you had to figure out which end was the front and which the back if you wanted to track it. Our navies cartwheeled, flanked, and full-stopped across that ten-grid sea.

We really wanted to introduce torpedo barrages and find a way to "Cross the T", but we never found a good way. I suppose a board does have its limits. Though not always: one summer my brother, my two cousins, and I set about designing a custom Risk board (which I believe is still in my parents' basement). We re-distributed continental boundaries, bumped the number of countries up to 150 or so, added new transfer routes across bodies of water, increased the number of dice to something insane, like 10 (though I can't remember why—maybe to make the game go faster?), and instituted sea convoy re-supply rules, in addition to the standard amount of treaties, doctrines, safe passage agreements, and cold war borders. I never could win at Risk. I think it was because I just wasn't comfortable sitting on the defensive. I would always go rogue, renege on all treaties, and rush my armies into a neighboring continent. This would give all the other players a reason to team up against me and force me out of the game, which is pretty much what the game is. It also didn't help me that the dice rules give the benefit of a tie to the defense, making the "charge" a lot less effective. I remember in college, when we had some heady, multi-day Risk offensives with troop numbers climbing into the hundreds and thousands, I wrote a simple program on my TI-83 to calculate troop losses according to the dice rolls. The program provided the option to go "to the finish"—in other words, continue to attack until one side was defeated. I was accused of faulty programming because the defense would always triumph. It was the math, I argued. With such a small number of troops at risk in any one "combat", and with such a large number of iterations due to the large total of troops in the battle, the defense would always come out ahead. Now that I think about it, it also could have been due to the fact that the basic random number generator on the TI-83 was, in fact, too random, providing none of those "streaks" of good rolls that ordinarily allow a player to beat the odds. A rule to trump this could be added. The rules of Risk allow an attacking army to roll one die (up to three total) for each army unit attacking, as long as the defender has a sufficient number of army units to counter each attacker. But when the numbers are very large, the attacker could be allowed to back each die with as many men as he wished, and to roll as many dice as desired. It could be explained in terms of attack fronts—the main front contains 10 units, and there are two flanking attacks of 3 units each. This way, the attacking player can determine his/her risk by choice, rather than leaving it up to the accumulation of random outcomes. Of course, over large iterations the risk is the same, but the ability to create massive attacks gives the player the ability to feel that s/he is overcoming attrition. And after all, isn't that the point of war?
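(The TI-83 program is long gone, but a minimal re-creation in Python of its "to the finish" option, under the standard three-against-two dice rules (ignoring our house rules, and the requirement that the attacker leave one army behind), would look something like this:)

```python
import random

def battle_round(attackers, defenders):
    """One Risk dice roll: up to 3 attacking dice vs. up to 2 defending dice,
    compared highest-against-highest; ties go to the defender."""
    a_dice = sorted((random.randint(1, 6) for _ in range(min(3, attackers))),
                    reverse=True)
    d_dice = sorted((random.randint(1, 6) for _ in range(min(2, defenders))),
                    reverse=True)
    for a, d in zip(a_dice, d_dice):
        if a > d:
            defenders -= 1
        else:
            attackers -= 1  # tie or lower: benefit of the doubt to the defense
    return attackers, defenders

def to_the_finish(attackers, defenders):
    """Keep rolling until one side is wiped out."""
    while attackers > 0 and defenders > 0:
        attackers, defenders = battle_round(attackers, defenders)
    return attackers, defenders

# How often does a 500-man offensive break an entrenched 500-man defense?
trials = 10_000
attacker_wins = sum(to_the_finish(500, 500)[1] == 0 for _ in range(trials))
print(f"attacker wins {100 * attacker_wins / trials:.1f}% of battles")
```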

I digress, but at the same time I think I make my point. A game is hardly the sum of its rules, or the spots on a die. A game is the social event in which one or more humans age 0 and up agree to a set of rules as a foundation of their interaction. During this time there may be competition, cooperation, challenges, creativity, boredom, aggressiveness, and even, depending on the game, sexuality. Sit in a popular bar one night, and try to develop the rules of human sex. It's pretty obvious--most people play by the same rules. I bet you can even create a comprehensive list governing flirting in less than ten rules, in most cases. Of course, the real challenge, as with most games, is defining where the edge of the board is, and where the rules stop applying.

1/27/2009

The Rabbit Hole of Ambition

So I was kind of stressing about Rabbit Hole Day. It sounded interesting and fun: blogging in a Halloween costume. But as the day loomed, I was having trouble finding a project. Anything that I could blog about with any real intensity, I pretty much already do. I thought about masquerading as a gadget blogger, or a hardcore liberal political blogger, or a philosophy blogger, but I kind of do these things already, albeit in my own way. I thought about making up a blog post like a teenager's MySpace page, but I did that satire already with my own MySpace page (which unfortunately seems to be lost to the depths).

Then I read Cory Doctorow's Rabbit Hole post on BoingBoing (where you can find out what it is, if you haven't Googled it already) and thought that was the perfect idea. Safe, perhaps; but at the same time, isn't revealing details about oneself that one never would otherwise the perfect mask to hide behind? Hiding behind the facade of the truth, which in essence is supposed to be a fundamental element of blogging: millions of anonymous strangers revealing details about their lives to the world that they wouldn't ordinarily tell their own mothers. I've always taken the opposite approach in my online writing. I have never used a pseudonym or screen name, and only write things that I am prepared to represent as written and published by myself. So, now I am going to take the opportunity to cut in a completely different direction and tell "you" all about a part of myself. (This also fulfills half of my promise regarding my Blog Poll; I received two responses, so I must blog about the chosen "favorite topics" at somewhere near half-quality. Paula responded that "she doesn't like math"--I still haven't figured out how I'm going to blog about that. But Betabug, kind fellow that he is, chose to say (perhaps a lie) that he reads because of me, even though he doesn't know me. So, this post is about me! And maybe only halfway-readable!)

I'm going to write about my writing. Not my blogging here, that is. I mean, write about the fact that I want to be a writer. I am one of those people. To prove it, here is a list of asshole-wannabe-writer things that I have done recently:

-I have been unapologetically expressive about my personal coffee fetish

-I have read pretentious literature and compared my own work to it

-I have struggled to write short stories about an author struggling to write

-I have "worked on a novel"

-I have felt a deep sense of annoyance at a rejection letter, "because my work is better than everything those assholes print"

-I have used a typewriter

-I have written in a Moleskine notebook, possibly while sitting in a coffee shop

-I have finished writing in a Moleskine notebook and looked up to see another person at the same coffee shop writing in a similar notebook and thought about how they are probably not as good of a writer as myself

Quite guilty. But of course, you knew I was a jerk, so why would anything change once I picked up the pen? So let's not talk about that. Instead, I'd like to share more about my writing and my writing process, for no other reason than that I have never done so before.

Originally I thought that I wanted to write SF. I even started, a couple of summers ago, writing an SF noir about a cyborg-alien detective investigating a strange streak of "psycho-terrorism" on a planet whose entire economy was based on tourism. It was called "Vacation Planet"--all my working titles are base and simply self-descriptive. The character, by name of Enoch (which has been the name of at least three separate potential characters since), finds the lush resort destination disconcerting after traveling from his own desert planet at the expense of the Tourist Board. He tries the vacation lifestyle, meets the underground resistance organization of the planet's service class, and eventually determines that it is not terrorism, but a self-induced psychosis brought on by living in a world that is a constant buffet line for outsiders. I had the idea while visiting my brother in Orlando. While my plane was landing, I saw strip malls, hotels, and amusement complexes and their infrastructures all the way to the horizon. I imagined what it would be like if I had to live there. It was such a full, bright city, but empty in a way that only Disney World's Main Street USA can be.

I worked on a few other SF projects, only one of which ended up finished. I found my attention mainly drawn to the brainstorming. Simply writing out my idea after all the fun of thinking it up was a bit of a chore. When it came to the actual writing, my interests were leaning towards a speculation of prose, rather than of concepts. To speculate with literature itself, I found it more helpful to stay with the most basic plots. Naturally one can do both; but I'm only a beginner, after all. Here are some examples: a derelict of a man eats a sandwich on a subway, disgusting his fellow passengers. A man dies, and finds that the afterlife consists of walking over a bridge. A middle school boy listens to his peers, perhaps more closely than anyone of that age should. These are ideas that are not really original or enlightening in themselves, but I try to think about a way that I could tell them so that the mundane appears exceptional. I like to think that this is my skill: what I am able to do with words. Perhaps I succeed, and perhaps I don't.

I think that I do succeed. I feel that my writing is not just good, or readable, but I think that it shares something new and creative, in a way that might be called art. I write a lot on this blog about what I think "creativity" is, or what "literature" is, but in truth, I don't think of my own work in any terms other than "worthwhile" or "worthless" once it is sitting on the page. Most of it does seem "worthwhile", and so I keep making it.

I have to, in a way. If I end a week with less than 20 new pages (pages that I am happy with), I feel anxious and disturbed. After work, if Megan is working, I might be able to get 3-5 pages, if I'm not too tired. Then I try and set one weekend night aside for writing; I nap in the afternoon, and then begin to up my caffeine level. If I can write for 8-10 hours (normally resulting in anywhere from 5-20 pages), I consider it a success. I normally shoot for Friday nights. If I come up empty on Friday, then the pressure is higher for Saturday, and my anxiety builds. The anxiety isn't bad in itself--it can act like a wave, pushing me once I start, and keeping me going rather than letting my mind head off towards the Internet, video games, too much alcohol (during the writing), or other things. But when I'm sitting with a blinking cursor, or trying to "have fun" knowing that I've only written 3 pages in the last 10 days, the anxiety can be a lot less than comfortable. I haven't really analyzed this or tracked it down in my psyche. I think I'd rather not, at least for now. I consider myself lucky to have it. Tomorrow I could be old, and content with my day job, and willing to just push that anxiety back and let life take its course. Psychoanalysts translate it as "drive" for a reason, I think.

To be completely honest, I do hope to be a well-known writer some day. Not famous, necessarily. I wish for these two things, specifically: one, to make enough money with the writing that I like, so that I may live comfortably and concentrate solely on my own projects; and two, to have enough of a following in readership that I can easily be aware that others care for and appreciate what I write. I don't think these are selfish or unreasonable goals. Of course, the publishing industry isn't really a well of hope right now. I've thought about the possibility that almost all writers would have to work for free (or almost so) in the near future. I think I would be okay with that, though it would be fun to live like Hemingway or Capote. But weren't they both independently wealthy before they were authors?

State of the Author: thus far, I haven't had anything published (not counting self-publishing). I have two pieces submitted, I have been asked to read at a literary reading in April, and I put a piece on Authonomy recently. As part of my "let's figure out where we're at" process, I made this tally last week:

Finished Work: seven (one novella, four short stories, an essay, and... err, Punk song-lyrics)

In Editing: five (one novella and four short stories)

Writing/Stalled: four (one novella, two short stories, and a poem. Novella is almost finished, the rest might never be, though the poem is close)

In the Aether: four (two novels, one short story, and one essay)

Even More Vague: I have a shit load of ideas scribbled down. Some of them are really good. Others are really funny. A few are illegible. At least one is probably really offensive, and I would tear it out of my notebook except that I can't bring myself to do it because that would make me feel even more guilty. My plan is that eventually all the unused ideas will be compiled into an epic poem of sorts. I write my notes in complete sentences or at least phrases (I always have) so it actually reads pretty well, if a bit esoterically.

I think one of the novels in "the Aether" could actually be something. This is what I'm working on now. My plan is to finish all the short stories that can be finished, bomb the shit out of the journals, and write the damn novel. I'm literally bursting with ideas for the novel, so I think it's going to be good. I'm actually worried that I'll never come up with something to match it once I'm done. Of course, this could all change tomorrow, and it could go back into the stack.

I hate, absolutely hate, submitting my work. The entire process is so antithetical to why I write. Of course, I write largely to fulfill my inner desire, so getting rejection letters is obviously antithetical to my ego's pursuits. But I wouldn't mind being rejected if I didn't have to wait five months to get a form letter. It's disgusting--everybody bitches about how literature has gone to shit, and then they treat potential authors this way. Naturally, most potential authors aren't about to be the saviors of literature. But it certainly seems to be an odd system of improvement and support. Imagine if education was run in the same way--you work hard at some abstract task for nine months, turn in your work, and then wait for months to receive a form with almost no feedback except for "yay" or "nay", which itself is largely subjective or at least meaningless to a point where it might as well be. Oh, wait... that is how education works! Well, no wonder people like TV. Television is always there and loves you just as much, no matter what you watch or for how long.

Well, so there it is: Adam Rothstein, the hopeful writer. Probably not too different from most hopeful writers, but hey, I'm me, goddamn it!

I don't discuss my writing on my blog because my blog is not about my writing--or at least not about that writing. It's nice to keep them separate. This blog is a publishing tool: a portal to a certain audience, to whom I write in a consistent voice closer to my own conversational tone than anything else. My subjects are things that I am interested in, and about which I choose to comment. But this does not really include myself, or the things that I write about when I write. When I write off-blog, I am crafting individual pieces, works that stand alone, unchained, in the subjective network of the Internet. I am in those individual works, but as "author", not specifically as the narrator. My blog is more of a pipeline that I log into, after which I begin to transmit straight from my own mind. Therefore, I let Author Adam do his own pretentious work, and here Writer Adam just tells it like it is for the peoples. They interact, of course. But form is as important as substance, and the Author in me knows his form, and the Writer in me knows his as well.

Have you enjoyed hearing about Adam the Author? I tried to tell you about him as truthfully as possible, though I am certainly a bit biased. It was a bit of a release to just talk about him freely, in this blog form. Certainly different than my usual rantings and ravings about the economy and technology. If you ever do want to read some of his work, you can always check out Brute Press, where he publishes online. I'll update you about him from time to time, of course.

Okay, I'm going to cut it here. I seem to have slipped into a disconcerting third person, which has some humorous, ironic possibilities, but also is starting to allude to the fact that this split-writer personality internet-publishing-form thing might have more uncanny consequences than we'd all like.

Happy Rabbit Hole Day!

1/20/2009

TARPs and MREs

Seeing as how the big news tomorrow will be the financial stimulus, I wanted to re-post this blog post from Robert Reich (Secretary of Labor under Clinton, all-around economics guy, and adviser [I believe unofficial] to Obama).

I'm going to intersperse my sardonic comments. To sum it all up, I think Reich gives some clear, albeit obvious, guidelines on what should happen to any further money, so that it does not end up like the TARP. He is phrasing his comments in terms of the second half of the TARP funds, but I think they should apply to any further financial intervention.

(snip)

What Should Be Done With The Next $350 Billion of Taxpayer Bailout Money: Criteria for TARP II

It's difficult to make the case that the first $350 billion bailout of Wall Street -- so-called "TARP I" -- fulfilled its goals, unless one argues that the Street would have imploded without it, which is pretty much what Hank Paulson is saying these days. And since it's impossible to prove a counter-factual, especially when the Treasury was never clear about TARP I's goals to begin with, Paulson may have a point. But the easier and probably more correct argument is that American taxpayers wasted $350 billion. [seems to me that someone was arguing that this would happen back in the fall!] No one knows exactly where it went -- at least two recent reports reveal that the Treasury had no idea [he's talking about the "capital purchase program" as listed in the chart in my earlier post, which 'only' lost 18%]-- but we do know the money did not go to small businesses, struggling homeowners, students, or anyone else needing credit, which was the major public justification for the bailout. In all likelihood, on the basis of the skimpy evidence we now have, the money went instead to bank shareholders in the form of dividends; to bank executives, traders, and directors as compensation (directors of major Wall Street banks continued to pull down an average of $350K each in 2008 merely for sitting in on a handful of board meetings at which they obviously didn't oversee very much); to some holders of bank debt; and to platoons of lawyers, accountants, and other financiers who have advised the banks about other places to park the rest of the money in the meantime.

Congress is now about to give the next Treasury secretary an additional $350 billion, [or $800B, or make a bad bank, or whatever] as the second tranche of the bailout. One hopes that the new administration will use it better. Some suggested guidelines:

1. Do not use any of the money to buy stock in -- that is, to "recapitalize" -- the banks. This is a sinkhole of cosmic proportion. Citigroup, to take but one example, has so far received $45 billion of taxpayer cash since early October (along with some $250 billion in taxpayer-supported guarantees from the Fed for junky assets on Citi's balance sheets), and is in far worse financial shape than it was three months ago. Perhaps, someday over the rainbow, these shares in Citi along with Citi's lousy assets will be worth more than taxpayers paid for them. But we're not in Wonderland yet and probably never will be. Giving Citi or any other big bank more taxpayer money is analogous to giving it to Bernard Madoff. It's a giant Ponzi scheme. The money will disappear. [yes! this is what I, and those other economists I posted about are saying! BUYING STOCK IN A FAILING COMPANY HELPS NOBODY]

2. Do not use the money to buy the banks' "troubled" assets. This might have made sense a year ago when the proportion of such assets -- which include mortgage-backed securities as well as loans to private-equity partnerships that pissed them away -- was relatively small. But these days a huge and growing proportion of bank assets are "troubled." (It's also a huge waste of taxpayer dollars for the Fed to exchange them for Treasury bills.)

3. Prohibit any bank that gets TARP II funds from issuing dividends, purchasing other companies, or paying off creditors.

4. Bar any bank that gets TARP II funds from paying its executives, traders, or directors more than 10 percent of what they received in 2007.

5. Require that any bank getting TARP II funds be reimbursed by its executives, traders, and directors 50 percent of whatever amounts they were compensated in 2005, 2006, 2007, and 2008. This compensation was, after all, based on false premises and fraudulent assertions, and on balance sheets that hid the true extent of these banks' risks and liabilities.

6. Insist that at least 90 percent of the TARP II money be used for new bank loans. If the banks cannot find suitable borrowers, they should return the money. [I think #3, 4, 5, and 6 should go without saying, if they get anything at all]

You may judge these conditions harsh. I think them prudent. They may force a number of big banks to go into chapter 11 bankruptcy, which would not be the end of the world but perhaps the beginning. [it is absolutely ridiculous that people seem to think we should have a banking crisis in which everyone gets to stay in business after they caused one of the largest asset bubbles in history] At least then we'd find out what was on their balance sheets, because they'd have no choice but to sell off some of their junk, even at fire-sale prices (believe me, if the price is low enough, there are investors around the world who will buy them); they'd have to negotiate with their creditors and pay some of them off; many of their CEOs would be fired and directors replaced, which they should have been already; and most of their shareholders would be wiped out, which is unfortunate for them but, hey, they took the risk. In other words, these provisions would force the banks to clean up their balance sheets. This is the only way to get them to start lending again. [or, maybe "somebody" should "take control" of these banks to force these things to happen.]

Meanwhile, Congress should attach to TARP II -- or to the upcoming stimulus bill -- a small change in the bankruptcy law allowing homeowners to renegotiate their mortgages on their primary residences (as owners of second homes and commercial real estate can already do). The practical effect will be to give homeowners more bargaining leverage with their mortgage banks, and save at least 800,000 homes from foreclosure. Yes, in theory, holders of mortgage-backed securities will take a hit but as a practical matter they've already taken a hit because the securities (and the securities in which they're wrapped) are already deemed to be junk. At the least, this change will put a bit of a damper on the rising number of foreclosures. A home that's occupied by a family paying something on their mortgage is far better than a home that's empty, on which no one is paying anything.

(snip)

The only problem is that this week Reich has taken a step back from the sternness of this post. Perhaps with the stimulus bill looming, or for some other reason (note in this one where he mentions that Obama's officials are nodding their heads at the bad bank), he is trying to be more diplomatic and to pick his battles.

But one thing is perfectly clear: TARP was a lesson in exactly what not to do. The money vanished, and shit is still broken. No more buying assets, no more buying stock, no more cushioning balance sheets so that investors can get the hell out. It's time to shut the doors, and open the books. Somebody needs to take control, and it sure as hell isn't these bozos.

Trust me--time is running out here. The stimulus isn't going to come in time, not before things get bad. I believe the ticker symbol you are looking for is MRE.

1/19/2009

Boop Bleep

I shared this on my Reader page (see sidebar below) but this deserves its own post.


Bars & Tones from André F. Chocron on Vimeo.

It's 60 seconds long, and it got me smiling on a Monday because this is how I feel, in my most tiredest, blankest, Michel Gondry depressed/happy-go-luckiness. Maybe today won't be so bad after all.


You could also imagine, as I did, Bjork explaining in her most stoned, broken Icelandic English, how "some times, when I watch the TV when it only shows the bars, I think about all the people who live in the bars, and how that I ask them to always be singing to me, because the singing from the bars sounds magical when it's on the TV." 'Cause that also made me smile.

Also thanks to Warren Ellis for posting it. He's kind of a weirdo, but he posts some good stuff.

Another Day Another Subsidy

Economy and Nationalization, quickly:

The Congressional Budget Office's report on the "subsidy cost" of the TARP has been widely covered, but I pulled this table and the accompanying quote off of FT Alphaville because of the lovely juxtaposition.

Remember how the TARP funds were supposed to be an investment for taxpayers? Well, the CBO has reported, despite Treasury's general unwillingness to provide any sort of specific information about the TARP, exactly how well this investment has played out.


"Amount" is how much the assets cost when the US bought them. The "subsidy" amount is the difference between how much the assets were bought for, and how much they are worth now. In other words, this is the amount of money that was just given away. The subsidy rate is the percent rate that we would be looking at if this was actually some sort of investment, and not a cash give away.

American taxpayers made -26% over about one quarter. That's $64 billion just thrown into the void.
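If you want the arithmetic spelled out, here's a minimal sketch in Python. Note that the ~$246 billion total outlay is back-derived from the $64 billion and 26% figures -- my assumption, not a number quoted from the CBO table.

# The subsidy arithmetic spelled out. The ~$246B total outlay is
# back-derived from the $64B / 26% figures -- an assumption, not a
# number quoted from the CBO table.
amount_paid = 246e9                  # what the US paid for the assets
subsidy = 64e9                       # purchase price minus current worth
current_value = amount_paid - subsidy
subsidy_rate = subsidy / amount_paid
print(f"current value: ${current_value / 1e9:.0f}B")   # ~$182B
print(f"subsidy rate: {subsidy_rate:.0%}")             # 26%, i.e. a -26% "return"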

And, the funny part is (because there is a funny part, right?) that even after doing this, all these banks are still failing.

And then, there is this gem, from Felix Salmon at Portfolio.com:

"I look forward to Treasury telling us, before it spends any more TARP funds, what kind of subsidy rate it considers acceptable. There’s a total of $453 billion in TARP funds not spent in 2008; if those too have a subsidy rate of 26%, that’s equivalent to government expenditure of another $118 billion. To put that number in perspective, the market capitalization of Citigroup and Bank of America combined is just $55 billion. Isn’t it about time we just nationalized them?"

Yes, why don't we?

Again, thanks to FT Alphaville, one of the sites that has provided a large amount of my economics education, and from which I just lifted this post wholesale, only adding a bit of attitude.

1/17/2009

"Walter Benjamin's Blog"

My article, "Walter Benjamin's Blog," was just published on The Brutalitarian, Brute Press' online theory center. It's available under CC license in pdf, html, and odf format.

If you've paid attention to any of my posts here about the future of literature, production and consumption in the digital age, or Internet and Information theory, you may find this essay interesting. It is a summation of my thought regarding all of these, compiled into a (fairly) well-organized article. I think some parts are pretty entertaining. In addition, I do feel that the conclusions I draw about the future of literature and digital reproduction should be taken to heart by many involved in the aforementioned. In other words, the article is not just rants or musings, but a contribution (I hope) to theory on the subject.

It's very serious! But, it's also not written expressly for people with a Master's degree in philosophy, so [you] can read it too!

I'm also pretty excited about it, and happy that it's complete. So you might hear me echo its existence through my various internet channels. Just fair warning.

1/16/2009

Hey, answer this question, jerks!

My comments section is not the most--well, populated. But, I thought seeing as it was Friday, a big day for blog traffic (if you could call it that here), I would have a small:

Comment Poll!

My question is this, readers of Welcome to the Interdome:

Obviously the agenda here is set by my will and my will alone. But, given that as many as 5 hits a day are from repeat visitors, I wouldn't mind hearing what sorts of requests I could happily deny.



So, what sort of topics do you enjoy reading about here?

a) pseudo-Marxist rants
b) pseudo-semiotic rants
c) stuff about writing/literature
d) stuff about the economy
e) wait, isn't a the same as d, and b the same as c? What is it with you?
f) random associations about internet memes
g) hilarious horse videos (I have thousands!)
h) do one of those funny self-interviews again, from back in the day
i) self-belittling comments about the lack of visitors to the blog
j) anything you write is wonderful, Adam--we're here because we like you!

I fully expect no responses. But, I offer these incentives to woo you into doing so:

1 response: I will immediately compose a post in that category, that will be the best one yet.
2 responses: I will compose a post in each voted category, or two for the same, if that's how the votes go. However, because I will be splitting my efforts, each article will only be of average quality.
3 responses: I will immediately begin to cop an attitude, talking smack about how popular my blog is. Then I will write one blog post about whatever I want, that will be of really poor quality because of my inflated ego.
4 responses: I will do nothing, except go have a snack.
5 responses: I will declare next week "Reader Appreciation Week", and write a post per day discussing the merits of each of the posters in turn.
6 responses: I will declare "Reader Appreciation Week", but each poster will only get a limerick, which may or may not be about them.
7 responses: "Reader Appreciation Week", with photo-enhanced toasts composed for each (and some for myself).
8 responses: yeah right.
9 responses (or more): I'll make everybody something fun. You actually might like it. Reader Appreciation Week will be canceled.
0 responses: We'll never speak of this again. Or, in case that motivates you not to respond, I will speak of this all the time, in a whiny tone.

Bring the noise.

1/15/2009

Designer Theory for Designers

Today, on his blog, Bruce Sterling linked to an article by Bruce and Stephanie M. Tharp about the "Four Fields of Industrial Design". (The article is here; Bruce's original post is here.)

The article is pretty simple: basically, they feel that design, and the discussion of design, is complicated by the fact that the field has trouble defining its intentions. To cut through this complication, they devise a rubric of the four fields of industrial design, as such:

Commercial Design (design intended to turn out a marketable product)

Responsible Design (design motivated by altruistic concern for a typically un-marketable group)

Experimental Design (design that pushes the limits, trying unconventional methods and motivations for 'pure research', for lack of a better term)

Discursive Design (design meant to "communicate", much as one expects the more confrontational forms of art to do)

They concede that most projects will overlap, but that these general categories will hold. That's all pretty straightforward and reasonable. The purpose is that:

"understanding the design landscape through these four, simple categories—Commercial Design, Responsible Design, Experimental Design, and Discursive Design—will help the profession, our "consumers," and ourselves better understand design activity and ultimately its potential in an increasingly complex world of ideas and objects."

And they enumerate how in more detail than I will.

But it got me thinking (as I often do): why don't we look at writing this way?

Well, for starters, the four categories are all based upon the intention of the work. Writers, while certainly taking their own intentions into account when considering their work (wait, what?), also have other factors in mind. I think there is much more concern for the "Craft" itself.

One might argue that the Craft falls under experimental or discursive, the two artier of the categories. Which it does, but also it does not. Both the experimental and the discursive intentions of design view their work reflexively, in terms of what it does to/for other people. Some writing could be taken this way, especially writing that does break some sort of barrier or cross some sort of line. For example, any work of fiction that was the subject of an obscenity trial could be said to be both experimental and discursive, pushing the boundaries of what is acceptable and the definition of artistic literature by questioning current discursive definitions. But what about someone who simply writes in the style of Burroughs or Ginsberg, now that their texts are recognized as valuable contributions to literature? That really can't be said to be experimental, and its discursive value would not be much different than the discursive value of any other piece of writing.

Most writers, I believe, would choose to write in a particular style because it is what is appropriate for the work, and stimulates them to write it as such. Far be it from me to argue that writing has any intrinsic qualities or essential characteristics to itself--writing is nothing without a writer and a reader. In this way, intention, as a force from outside the writing itself, is an axis that pins the three together: writer, writing, and reader (even if the reader and the writer are the same person). So intention plays a role, but the relationships between intention and these three planetoids are more complex than an analysis of the post-design market as it relates to the product. The realms of ideas, conversation, and morals are negotiated markets just as much as the world of consumer electronics sales, and hence, simple "intention" falls a bit short in description. Why does a product do well? Simply because it was intended to do so? Or why does it stimulate conversation, or push the envelope? Even in the non-commercial markets, currency, exchange value, and commodity have various, complicated roles to play outside of simple intention--you don't need me to tell you that. So there are many different markets interacting, adjusting values and the status of objects, in a densely complicated web that pretty much defies anyone's intention.

But are these end, consumer markets the only thing that defines the creation of the product? Any designer could tell you there is much more to it than that. Many factors go into the design and production of an object: from the cost of raw materials, to the equipment needed, to the durability of the design in shipping. Most of these, in times such as these, are monetary--but they don't have to be. Ergonomics is a selling point both on the product end and on the worker end--because I guess it turns out that safety is more affordable than accidents in the long term. But what about other things, like enjoyment of the task for the worker, or harmonious installation of the factory in its neighborhood? Are these foolish notions, or interesting design problems?

For the writer, they certainly aren't foolish notions. While it is not necessary for a writer to enjoy what s/he writes, I can assure you that the writing will be of much higher quality (by any measure) if s/he does. There was once a fancy term for all these design problems in production, called the "means of production". For a lot of people, it is still an important concept, no matter what you call it.

So what have I proved? That writing is not intention-oriented, or that it is? The relations of production, whether of writing or of manufacture, are certainly symbolic, physiological, and capitalist markets of their own, and rightly so. So perhaps it is that our intentional scope must only be broadened: intention not only extends from the designer through the product to the consumer, but must extend in a web in all directions, from the designer to the product, to the worker, to the factory, to the city, to the transit system, to the sewer system, to copper piping, and back again to toilet design.

But I think I was right originally: there is a lot more to writing (and design) than even the most complicated "intentions" can encompass. Intention, after all, is merely what we discover after the fact. Why did you use that drill as a hammer? Oh, I intended to save time. No, you were lazy--because if you had thought about it, then you would have easily come to understand that using an expensive, electric tool as a blunt force object would break it, wasting lots more than just the time to get up and get the hammer. Or a better example--why did you sell bad mortgages? Because you intended to help people into homes. Bullshit: you were greedy, and didn't care whether the person filling out the application was a family, a developer with no capital, or a con-artist. And even saying that is giving you too much of the benefit of the intentional doubt--greed wasn't what you felt at the time, it was simply what we call your actions afterward. The strange machinations that make design, or any other procedure, succeed or fail can only later be labelled as "good intention". To return to design as an example, the iPhone is only an example of good design after the fact, because it made a lot of money. If it had not, then it would have been bad design. The list of things that should have made money but didn't is as long as the list of things that shouldn't make money but do.

In the actual process of writing, and I imagine, in any other creative endeavor, there is a moment where production clicks, both in the hands and in the mind. This is about as far from intention as we could get. An author doesn't intend a verb to signify action, any more than he intends to write the next great American novel (well...) because it is only in doing so that a verb is a verb, as it is only after the novel is finished that it could be considered within history. Symbols, like objects and people, do not intend anything; they simply do things in relation to other things. This is no more intrinsic than it is intentional--the fact of meaning is, as the philosophers might say, "relative," and by this I mean relatively meaningless. The fact that meaning occurs is, for all intents and purposes (no pun intended), miraculous. By this I mean as good as un-caused, and also as important as an act of god. The reasons behind it are theological in scope, and while arguable, certainly not directly relevant in their real importance. Just to be clear: I don't mean that the meaning of words comes from another dimension, streams in heavenly rays from the firmament, or otherwise spontaneously bursts into existence. If god turned Lot's wife into a pillar of salt, he must have done it somehow, but I doubt that Lot or Lot's wife gives a good goddamn how he did it.

Back to the point: despite the unconscious, miraculous machinations behind words (ah... hint, hint!), the author, in lining them up one word after the other across the page, is working with them as unintentional fragments, using his own consciousness and intention like a needle to sew them together. This is a little sharp bit of an axis, but hardly one piercing them all together. There are many more important factors to the author (and likely his audience as well), things as esoteric as plot, voice, and characters. Of course, we intend all of this to be "good", so that the writing is "good". And we intend them to do various things amongst themselves, organizing little markets of our own in the text. But knowing this and intending it is hardly a lesson on how to write well, is it?

Can you teach creativity? Can you make a rubric of the different categories of creativity? Can you intend creativity? There are lots of other things you can do, but I don't think any of these work. Creativity is, in a way, the opposite of intention, and its partner. You may intend something going into production, or afterward, say that you had such an intention. You might do the same with creativity--hoping for it in the future, and attributing it to actions in the past. But the difference is that at the very moment of production, in the act of producing, creativity may occur, whereas intention never will. You might realize creativity when it hits, or not until afterward. Maybe you will eventually deny it, though others laud you for it. But when words flow out of the mind to the hand, creativity is the force that guides them, and this is where intention simply cannot go. Intention can aim and pull the trigger, but creativity is the bullet that will or will not hit the target. Because, in actuality, it is the target as well.

So in the end, I suppose my point is this: the authors of that article are absolutely right. By thinking and discussing the role of intention in design (or anything else for that matter) the practitioners and users thereof will likely benefit by guiding their practice and recognizing their own potential. However, we all wish and intend lots of things--I hope that this thinking and discussing doesn't end there.

But maybe after reading this, you will wish that it did!

1/13/2009

Prose Fragment: It was...

I have made it somewhat of a rule not to publish any of my fiction on Welcome to the Interdome; not because I shun the blog format, but because this is more of a personal, narrative space for me than any of my finished writing is. Besides, I have Brute Press for that (though the poor beast has been a bit neglected of late in actual posted material).

But, I stumbled upon this fragment in my own personal data cloud and for the life of me can't remember or find any note relating to what the purpose or direction of this prose might be. And, as opposed to some of my other fragments, I quite like several bits of this. So, why not throw it out there?

Here: enjoy a bit of disjointed, unedited prose--think of it as unavoidably poetic, though I never intended it as such.

I should add that I never write "this way". I typically write what I feel to be very direct, very transparent and lucid, though dense, prose; I don't like writing that is purposefully vague and bordering on meaninglessness. I consider this a flaw of my training in philosophy.

The title of this fragment is the file name under which it was saved.


It Was...


Somewhere in the universe is a writer that cannot sleep. And I say this after all the people are dead.

Tiresome, wakesome, troublesome truth is in insomnia. No sleep is a passing symptom--soon recovered, soon relapsed--suffering and celebrating: out-of-desire-made-dreary-in-duty to a deeper, underground, transcendent, NOTHING.

Nothing to do with sleep. It is the big, fat, dead gray of the difference between before the sun and after.

I press the button again and again and again. Out comes the most horrible, piercingly drab monotone beep--it is the sound of nothing. I hang up and look at the phone, expecting it to exist for ever.

That phone is the lord; cower in fear, launch the blasphemies that have been carefully sculpted over thousands of bloody short lives, praise its name and its pure holy tone. I reach to press it again, but my hand is thinking, thinking of nothing, and it knocks god to the floor; it clatters and its battery skitters out on the concrete.

"YOU SHOULD REALLY GET SOME SLEEP"

Nothing to do with it, not me or anyone else. It's the corpse of a whale, eaten by sharks, finally on shore. The journey is complete, and it was dead long ago. I've been awake the whole time and finally getting somewhere.

If I slept, who would know? Upon its single axis gravity foot the world spins free of the constraints of my consciousness. Sleep, wake, sleep, wake. How many days have passed? How much rest do I need? What do I do to need more rest? What do I do to get more sleep?

These questions are futile. The answers are easy.

There once was a writer who traveled the entire world in a paper sack. The sack was as big as a full-grown child, and he would ball himself up to that size and crinkle the worn fibers down over his head until the paper mouth kissed the floor. From inside his bag this man could fly anywhere that he chose, as long as the bag was there and he was inside it. One day, when feasting with emperors and dancing with queens, he hiccuped on the wine of some exotic locale, tripped, and split the sack wide open. He tumbled out of his tiny corporeal ball, and lay across the floor, stretched out like a whale on the beach. Home isn't home if you know you can never leave again, and so he cursed the journey as much as the destination.

Both the journey and destination are cursed if they're one and the same. Insomnia, and its twin, the increasing, daylight now, are the spawn of that unholy lord.

"YOU SHOULD REALLY SEE THE BEACH"

This History of the Internet

Found this video on 10membranes.


History of the Internet from PICOL on Vimeo.

Besides the worth of the video itself, it seems that it was made using PICOL. From the PICOL website:

"PICOL stands for Pictorial Communication Language
and is a project to find a standard and reduced sign
system for electronic communication. PICOL is free
to use and open to alter."

Well, okay!

There's not much to it right now, but they were able to make an entertaining and descriptive video using only 22 basic symbols. Considering my last post, that seems like something to watch. New internet magic fingers, miming a story of how the internet got made.

Good enough.

1/09/2009

idk, my bff Twitter?

I've been experimenting with Twitter for about two months now (@interdome, or follow the link on the left) because, well, it just seemed like the thing to do. There's some crazy format sweeping the net, so I'd better check it out.

It's interesting.

I'm not going to speculate on its future right now, because I think it's still up in the air. There probably has not been enough exploration with the API to see its total use potential, and not enough widespread adoption yet to see whether the wave will crest. Maybe it's the new email/SMS, or maybe it will just be a half-MySpace belch. I've gotten spammed on it ("false friend" followers who only post about get-rich-quick shit), so that probably says that it's getting popular, but still isn't perfect. Most of these false friends have their profiles deleted in a day or two, so that's a good sign. But if it's going to be a catch-up game the whole time, that's not good. Then again, it wasn't making my usage inconvenient (just making me disappointed upon finding out that no, I didn't gain three followers in one day).

Anyway, the most interesting thing about it to me is how the format is defining the usage. Any technology, whether it is the pen or the typewriter, the telegraph or the camera phone, begins to dictate its abilities and limitations from the minute people start to use it. It can be designed with any purpose you like, but every carpenter knows that you can very easily use a drill as a hammer. Consumption forces a feedback on production in the case of most products, but especially with new technologies that may have a certain vision attached to them. And when it comes to information technology, the creative powers of the mind are the limit, with a huge amount of latitude. Like some sort of scary SF concept (or the words of Dr. Seuss in The Butter Battle Book), "we don't even know all the things it can do"--in other words, how it will actually be used in the field. Twitter is a very interesting toy--very simple, but with a very constraining principle to guide its usage. You only have one hundred forty characters, which is a fun game rule. Every web tool has some constrictions, but this is one that is very specific, and really plays upon its role and use. The reason is that the service started as an SMS service, and though it seems most people now access it through the web, you can still post and get feeds through SMS (which on most carriers is limited to 140 characters).

The content, therefore, is limited to either one-sentence quips, hyperlink referrals (most often through TinyUrl, which is a site that will shorten any URL to about fifteen characters by relinking through www.tinyurl.com), or a combination of SMS and IM abbreviations. Of course, each of these is really part of the same function: the ability to send a 140-character message to an opt-in network. However, I distinguish these as separate because they seem to represent a much different mind-set towards the user and the receiver, based upon the functional use of the 140-character networked message. Like any new tech tool, design sets the foundation, but then use really determines the function of the tool. I see these three uses as the primary functions of Twitter, and at least in the present they define the potential that it has as a form of networked informational distribution (which is what all of the internet is, after all). I'm going to discuss each of these in reverse order, from least to most interesting in my mind.

The SMS-like capacity is interesting. First, if you do not have a web-active phone, you can get a modicum of connection to the world of the networked internet. I believe there are linking apps between Facebook and MySpace and Twitter, which is another bit of access for those people. The network ability is the key here; sending one SMS can potentially reach an unlimited number of people who opt in to your personality.

But SMS is also the limitation; you still can't do anything through SMS that you couldn't do before (i.e. follow links, manage friends and profiles, etc.). As such, these sorts of Twitters read like SMS: @teen23- were @ da mall whr r u? Not so great. You can send these little updates to your network, and receive them from your network, but the message stays the same. What really makes this not so great is that anyone subscribing to your network now gets to decipher your SMS-speak, and is rewarded with the enlightening knowledge that you: a) can't spell; b) are hvng a rly rly book time @ nad bar DRUNK hrhrhrhr; and c) that you not only need me to know all these things, but your whole network as well.

If you are a sensible person, you might realize that these sorts of messages would alienate your network, frustrating them and potentially disconnecting you. You might then keep drunk-texting back in SMS where it belongs, rendering Twitter through SMS not so great. This is the same for any other SMS-type message--if someone's list of updates is solely their location and current action (coincidentally, the original intended use for Twitter), then I am not likely to respond. If this information is not directed at me personally, then why would I be interested? If you want to meet at the bar, text or call; I'm probably not going to show up at your friend's film screening in Brooklyn when I'm living on the West Coast. But this is also the reason I don't use MySpace or Facebook anymore--my network is just not that networkable. So others might disagree about this use for Twitter.

The second major use of Twitter--broadcasting links to your network--is a bit more interesting. Many blogs boil down to this: hey man, check this out [hyperlink]. This is a great thing, because without direction it would be impossible to search through the stew of material out there. Even a specific search now generates thousands of hits, so if you are not searching for something specific, someone's gotta tell you about it; otherwise you would never know.

This is also the greatest potential for Twitter-spam I've seen. (I've heard phishing is also prevalent, but I'm going to ignore this, because as long as communication has existed, there have been people dumb enough to give away information to those who should not have it.) The TinyUrl actually feeds the spam potential, because in shortening the URL it also hides the real home site, and you can't figure out what it is without navigating to the link. This is avoidable by only following links posted by folks in your network whom you trust. Simple enough (for some, anyway).
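For the less trusting, here's a toy sketch of a workaround: peek at where a shortened link actually goes before visiting it. This uses only Python's standard library, and the TinyUrl address below is hypothetical, not a real link.

# Resolve a shortened URL to its real destination without "clicking".
# urlopen follows redirects by default, so geturl() reports the final
# address at the end of the redirect chain.
import urllib.request

def resolve(short_url):
    request = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.geturl()

# print(resolve("http://tinyurl.com/example"))   # hypothetical link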

As a form of content, this also has its negatives to go with the positives. You can't get much commentary in along with the link in only 140 characters--only a short description if you are lucky. I love the commentary; it's why I read blogs that are only an edited synthesis of the web, like BoingBoing or Slashdot. The links I could find elsewhere, and by the time they've reached these clearinghouses, I have most likely already seen them somewhere else. But as long as there is witty/silly/insightful commentary about the link, I'll check it out again. This is stripped out in Twitter due to space constraints. However, sometimes this is a blessing. Some sites are clogged with so many link re-postings that I can't read them all. By filtering off the links that don't require commentary to a Twitter feed, I can get pure links, and I can get pure text. Sometimes this works best. Also, there are some folks who do not blog and only Twitter, because the mixed use of Twitter and its easy, short format appeal to them. Now I get their own personal links too, which are sometimes of more interest than the common denominator of a web clearinghouse. And, naturally, sometimes not.

What this has done is taken a particular aspect of blogging, the networked re-direction, and given it a cubby-hole exactly the right size. This is the most index-like quality of the internet, the hyperlink, combined with the most communal, the network. It is a giant web of index fingers, pointing from one spot on the net to another. If you consider each Twitter user a "tag", the tag being their own interests and personality, what this does is essentially tag the entire internet. By subscribing to a friend's feed, you know you are going to get links tagged "Adam", culled from all over the web. It is a pretty amazing function that seems to have grown completely without intentional design. The service provides 140 characters and a network infrastructure, and what individuals have done is to knit this service into a remarkable, personalized indexing service the likes of which hasn't been invented yet.
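In data-structure terms, the "user as tag" idea is a very small thing--this is my own gloss, not anything Twitter actually exposes: a map from person-as-tag to the links they have pointed at, where subscribing just means choosing which keys you read. A toy sketch in Python:

# My own gloss on "user as tag", not anything Twitter exposes: the
# whole indexing function reduces to a map from person to links.
from collections import defaultdict

feed = defaultdict(list)        # user-as-tag -> links they have posted
feed["Adam"].append("http://tinyurl.com/example")   # hypothetical link

following = {"Adam"}            # the opt-in network: which keys you read
for user in sorted(following):
    for link in feed[user]:
        print(f"link tagged {user!r}: {link}")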

There are Digg and Delicious (I'm not going to figure out where the dots go right now), which are similar, compiling the interests of various people across the internet. The difference is two-fold. These sites still work with subject tags and overall hit-counts. While you can personalize your network, the site works on general interest. In other words, you might "Digg" a site, but your motivation is general "Diggability", i.e. the potential popularity and the accessibility of the topic, not targeted redirection. With Twitter you are broadcasting your link with your audience in mind, which greatly changes the links. It is not a general archive of good stuff, it is stuff worthy of indication--hence that internet index finger. It is not necessarily funny, insightful, important, or on a particular topic (though it may be). It is just worth posting. Some might quibble on this difference, but I think it is real and important.

The second difference is the network. Digg's (and other such sites') primary usage is the posting of links, so networks grow along those functions, including the general feed of all users. Twitter grows more haphazardly, because re-linking is not the only function of the network. You get links from people you know, but who are not necessarily interested in "Digging" the web. It is casual and less conscious. I return to the example of the index finger with reason. Pointing is a universal way of indicating attention. It is not a word, and though some may argue, it verges on not even being a symbol. It is a somatic motion linked in some strange inter-personal way to the eye. I point, and your eyes follow. Naturally a hyperlink is a symbol, but it is so locked into the technology of the internet (dare I call it part of our somatic-cyborg structure?) that anyone familiar with the internet will know what to do with a link: you click on it. The fact that TinyUrl takes any indication of where you may be heading out of the equation only strengthens the case. It is as if we are riding together in a car--I grunt, and point my finger at the passing scenery, and you look. Maybe I pointed for a reason: because it is our destination, or an example of what we were talking about, or my favorite restaurant. Or maybe I just wanted you to see it, for some reason I can't fully cognize. Hey, look at that! And you look. Twitter is not just this, but it is a very interesting sort of hive-mind or flock-consciousness. I post a link, and I bet at least one other person will look, regardless of what my reasons for posting were. Just think about that for a minute.

Thirdly, and even more interesting than internet-index fingers, is the one sentence quip, or the 140 character phrase. What would you say to all of your network right now, given only 140 characters?

The Twitter haiku is clearly a first destination for many, and Twitter poetry is out there in force (and I'll let you be the judge of that). But when it comes to prose, Twitter poses an interesting twist on the idea of the Internet index finger.

If you ever listen to the way that folks talk--and by this I mean listen to the words that they say rather than what they are trying to mean--you might discover that most people waste an incredible amount of words. Spoken sentences are full of awkward pauses, misspeaks, and doubled-up words, both superfluous and contradictory. If you take a look at parochial writing, you will see the opposite. Read everyday business email, sent quickly without review, and you will find missing words and punctuation, thoughts truncated to the point where they are unclear or confusing, and spoken words translated into writing so quickly that we miss homonyms, to say nothing of ideas.

Writing for freer purposes is different, I imagine because the tendency to read back over the text is greater, facilitating the process that turns words into prose. But when we throw the ol' 140-character limit into the mix, we get a whole new ball game. When a user decides to write something for Twitter publication, the limit is immediately a constraint and a force upon the writing process, even before the first revision. There are different ways that the limit functions.

a) Is my thought brief enough to fit into a Twitter message? If it is too involved, it gets bumped to blog or to email.

b) Then there is the crafting of a message creative enough to interest the network--the network may be your friends, but there is a competition inherent in all social networking, whether it is taken seriously or not. Not just for Top 5 or any definite rank, but the link and the click count are ever-present on the internet. How to be creative in 140 characters or less is a bit of a challenge.

c) After you have your message, there is the edit. If it goes over 140, it must be trimmed: no exceptions. This occurs both during writing and afterward. This is where it gets interesting.

Obviously, abbreviation is a big factor. The web has birthed thousands of abbreviations for common words and new, shortened slang. There are also the symbols, whether they be emoticons, the crazy Japanese symbols the name of which I forget, or other representations that are not abbreviations, like @, <3, 420, and so forth.

Then there are the more linguistic forms of shortening. What words and symbols are not crucial? Punctuation is often the first to get the axe. With every character being of equal value, eliminating three commas and a period can equal a whole word's worth of text. Same goes for apostrophes and quotes. Spacing can even be removed, with no content lost.

Word choice also tends to the simplistic. Adjectives, nouns, and verbs often lose their appropriate tense, or are discarded in favor of more efficient choices. Critics may say that this ruins language, but I disagree. Twitter is a word game, not a replacement for actual writing. It doesn't mess with language rules any more than Mad Libs; it simply seeks to pervert them for a purpose.

One of the more interesting aspects of this warping of language I have experienced is in my prepositions. I love punctuation: for me it is the life-blood of writing, giving it the proper meter and establishing a rhythm that carries one through the sentence. You might be able to tell this from reading my blog; I love using em dashes, semi-colons, and parentheses. So I try not to sacrifice these unless completely necessary. I might let an apostrophe slip, but would take a word out before losing a necessary comma. Where I find ample room to trim the fat is in prepositions. It is very easy to swing a sentence around so that "in" or "of" can fulfill the place of a "through" or "with". Sometimes they can be eliminated entirely. "When I'm through with work" can become "When I'm out of work" or "When work is done". "Throughout the day" becomes "in my day" or "of the day". "Without" can be "not in", saving one character, or "talking about" can be "talking of", saving a whole three. Perhaps it is because my prose tends to be a bit word-rich that I find it a challenge to prune it, but it really makes me think about the way that syntax works as I try to twist it into a smaller string.
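If you wanted to mechanize these tactics, it might look something like this toy sketch in Python. The swap table is my own invented example of the preposition tricks above, not any real Twitter tooling, and the order of sacrifices follows my own priorities: prepositions first, apostrophes next, necessary commas last.

# A toy trimmer: cheapest sacrifices first, necessary commas last.
# The swap table is an invented example, not any standard tooling.
import re

LIMIT = 140
SWAPS = [
    (r"\btalking about\b", "talking of"),   # saves three characters
    (r"\bwithout\b", "not in"),             # saves one
    (r"\bthroughout the\b", "in my"),       # swing the sentence around
]

def trim(msg):
    for pattern, shorter in SWAPS:
        if len(msg) <= LIMIT:
            return msg
        msg = re.sub(pattern, shorter, msg, flags=re.IGNORECASE)
    if len(msg) > LIMIT:
        msg = msg.replace("'", "")     # let the apostrophes slip
    if len(msg) > LIMIT:
        msg = msg.replace(",", "")     # commas only as a last resort
    return msg                         # still over? bump it to the blog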

Of course, you could just ditch proper grammar altogether, but you are still tied to the rules of meaning. Function c) is tied to b), and they both sum to a). Twitter's format ties function to content by making communication not simply an open pipe, but a negotiation with meaning that is skipped in other sorts of communication. I find that messages in Twitter, compared to most other networked forms of communication, are more succinct and interesting, simply because they are written under technological duress. Naturally, individual opinion on this will vary--but I still find myself intrigued by Twitter, whereas the nightmare of the MySpace page and the Facebook Wall have chased me away forever.

Twitter is interesting not only as a new technological tool, but as a linguistic tool. I feel that part of the reason it seems to be latching on is not that it is filling a communications gap--there are many other tools available that duplicate various functions of the app. However, none of the competitors of Twitter combine such a unique linguistic rule set--the 140-character limit and the blind hyperlink--with such a flexible opt-in network. This, combined with the portability of the hardware that is now capable of accessing this network, has allowed a critical mass to develop that is shifting the way that online communication occurs, and not just by inventing and popularizing the "micro-blog". This is wholly new: the limited space of Twitter gives our communication networks a new communicative form in the "hyper-index" and the "network-exclaim." These are the new gestures of the digital realm--not just new symbols, but new modes of expression defined by the techno-somatic capacities that are their substance. It is from our digits that we gained our base-10 number system, and from our two hands that we first understood the principles of binaries. Now, in the latest extension of our mind's grasp, our fingers fly over keyboard and keypad in short, new gestures that are translated in terms of our fingers' most complicated machinations yet. What will these new fingers and grunts be able to grasp and attract? Will it be long before there are scripts that we command with Twitter messages? Those who experiment with Arduino boards are already heading in this direction. What sort of programming languages will find their expressional dimensions in terms of 140? Will unique Twitter messages themselves come to be symbolic, each a rune inscribed into an ongoing semiotic, networked in real-time into an opt-in Internet consciousness?

Whoa.

My calculator sent back an error when I tried to figure out the number of possible Twitter messages using the twenty-six letters of the English language alone--that's 26 to the 140th power, a number nearly 200 decimal places long; that's an awful lot of fingers for us to send poking through the Internet.
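For what it's worth, a language with arbitrary-precision integers picks up where the calculator gave up. A quick check in Python:

# The number of distinct 140-character strings over a 26-letter
# alphabet. Python's integers don't overflow, unlike my calculator.
n = 26 ** 140
print(len(str(n)))   # 199 -- a 199-digit number, well past 100 places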