Who am I?

You can look at my home page for more information, but the short answer is that I'm a dilettante who likes thinking about a variety of subjects. I like to think of myself as a systems-level thinker, more concerned with the big picture than with the details. Current interests include politics, community formation, and social interface design. Plus books, of course.

Archives

Mon, 04 Apr 2005

conversational alignment
This is a post that I've been thinking about for a while and partially wrote, but never got around to finishing. And I'm only finishing it today because I want to write another post that refers to it. Welcome to the wacky world that is my mind.

Here's the question of the day: why is it that we have better and longer conversations with people we know well? It seems like it should be the other way around - with people we don't know, there's an endless amount to talk about, since no history is shared. With our good friends, we know all of their stories, we know all of the inside jokes; things that would otherwise take thirty minutes to explain can be referenced in a single word. And yet I can often find myself talking for hours with my best friends, whereas with people I don't know, the conversation dies out in minutes, if not seconds. So understanding the difference matters to me, because I like good conversations.

After thinking about it for a while, I decided that it is not despite, but because of, those hours and hours that I have invested learning all of my friends' histories and inside jokes that we have good conversations. We have invested that time in developing an understanding of each others' mindsets. We can move past surface issues like definitional considerations and on to the really interesting idea cracking that lies underneath. We can use those inside jokes and references to skip over the boring parts and get to the heart of philosophical issues.

Essentially, all those hours we've spent learning about each other have let us align our reality coefficients, so that we are living in the same reality when we speak. As that footnote suggests, there has to be an initial similarity of reality coefficients to make conversation possible at all, but I think that reality coefficients can be jostled into closer alignment by steady application of conversation. The more we talk with somebody, the more we learn to view reality through their eyes, understanding why they place the values they do on things. And by doing so, we can get down to the core value differences and start exploring why they differ, which is often really interesting.

Meanwhile, with people we don't know, we can start talking, but the conversation will often get hung up on very shallow things like a sharing of history ("Where'd you go to school? Oh, MIT? Wow, you must be smart!"). And there's nothing wrong with that - you have to go through that stage to get to the more interesting stuff. But often, when faced with the effort of trying to get to know new people and put in the work necessary to get them aligned with my internal cognitive structure, I throw up my metaphorical hands in despair, and either go find some of my good friends or come back home and spew on my blog.

I guess this whole post is a restatement of the idea of exformation from The User Illusion, where exformation is the context that we use to interpret incoming communication. Since all incoming communication, whether speech or text, is relatively low bandwidth, it is up to our brains to unpack the coded information, using the "exformation" context, to make sense of it. I think the bit that is new here (although I haven't read that book in years so it's possible he talks about this) is the idea that a greater familiarity with somebody leads to a context that is more shared, and therefore communication that is less likely to be misinterpreted.

Huh. Just pulled out the book, and Norretranders doesn't quite make the point, but has an apropos quote:

The least interesting aspect of good conversation is what is actually said. What is more interesting is all the deliberations and emotions that take place simultaneously during conversation in the heads and bodies of the conversers.

With people we don't know, "what is actually said" is pretty much the same as "all the deliberations and emotions". Because there is no shared context, we are forced to communicate through the narrow bandwidth of speech. With good friends, a shared context of "exformation" has been developed so that we can transmit much higher volumes of information through speech because a few words will evoke whole sets of memories. As I said earlier, "things that would otherwise take thirty minutes to explain can be referenced in a single word". So our greater familiarity with each other allows us to have much broader exchanges of ideas because we are leveraging that familiarity to exchange vast swathes of information. Or to tie it into my recent line of thought, greater familiarity means building up similar cognitive subroutines, such that the same stimuli evoke the same reactions.
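
To make the compression metaphor concrete, here's a minimal sketch (entirely my own toy model, not anything from Norretranders): shared context acts like a lookup table on the listener's side, so a single token unpacks into a whole story, while a stranger receives only the literal words.

    # A toy model of "exformation": shared context lets short messages
    # stand in for much larger bodies of meaning.

    shared_context = {
        # inside-joke token -> the thirty-minute story it evokes (abbreviated here)
        "band camp": "the whole saga of that disastrous summer tour, start to finish",
        "the blue couch": "everything that happened the night we moved apartments",
    }

    def unpack(tokens, context):
        """The listener expands each token they share; unknown tokens stay literal."""
        return [context.get(token, token) for token in tokens]

    message = ["band camp", "the blue couch"]

    # A close friend unpacks two words into whole sets of memories...
    print(unpack(message, shared_context))

    # ...while a stranger receives only the narrow, literal bandwidth of speech.
    print(unpack(message, {}))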

Anyway. More thought required. I think there's some interesting stuff here, especially in the idea that becoming better friends is a re-alignment of reality coefficients. And that leveraging those reality coefficients is why we have better conversations with our friends than with strangers. But I'm getting tired, and I have one more quick post to write, so I'll stop here for now.

posted at: 23:11 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 02 Apr 2005

Context sensitivity
I've talked about the importance of context to cognitive subroutines before, but I wanted to pick up on it again this morning. I've just spent most of the last three weeks in New York City, living a very different kind of life in a different place. I walked almost everywhere I went, I was going to shows almost every evening, etc. And I was curious whether, when I got back, it would feel weird to be back after having spent that much time living a different life. And the answer is no.

This is fascinating to me. If we only had one set of responses, I think three weeks would be long enough to start shifting those responses to a new paradigm. But that didn't happen. What I think happened was that the old routines didn't really apply in New York (things like the impulse to hop in a car to get anywhere, or the idea that I should buy things in bulk), so I developed a new set of New York responses. As soon as I got back to my old life in Oakland, with its set of environmental inputs, the old routines were re-activated. But the thing that's really interesting to me is how seamless it felt. It didn't even occur to me that one set of behavior patterns should feel out of place until I asked myself whether it felt weird to be back.

I think this demonstrates the real power of context: our environment controls how we respond so smoothly that we don't even notice when our behavior patterns are wildly different. We think of ourselves as having a central core of behavior (and to some extent we do), but it's amazing how easy it is to alter that behavior by changing the environment. The obvious examples are things like the Milgram experiment, where, just because a white-coated authority figure told them to, people were willing to shock an unseen stranger into apparent unconsciousness. But it shows up in all aspects of our lives. We behave differently at work than at home. We behave differently on vacation.

This is why I think Lakoff's work on framing is so important. By changing the frame, we change the context, and people respond differently without even realizing it. Our behavior changes utterly seamlessly. Our consciousness papers over the gaps and makes it all seem consistent, even when it manifestly isn't.

It starts to get pretty disturbing when you think about framing as a form of brainwashing. Framing's goal is to change what people think by changing their view on an issue. This is what I don't like about Lakoff's work: he suggests that we must fight frames with frames. I think he might be right, that such a battle may be our only option, but I'd love it if we could teach people the self-awareness necessary to understand frames, understand context, and be dispassionate enough in their observation of themselves to see how their behavior changes in response to such frames. I know it's a pipe dream, though. Most people have a strong sense of themselves, believing in their continuity from moment to moment. Our consciousness is wired to preserve that illusion (which is an interesting question in itself - why should our consciousness do that? What benefit do we get from not seeing ourselves as a set of context-activated cognitive subroutines? And how did our consciousness get so good at explaining away all the little inconsistencies of our unconscious? I'm thinking specifically of the way that people who have been hypnotized will rationalize their behavior even when it is ludicrous. Wow, this is a long parenthetical. Um, anyway).

While I think this line of thought is a little depressing, that we are nothing more than automatons responding thoughtlessly to our environment, there is one upside - it answers the question of how we change ourselves - we change the environment. I mentioned this before in the setting of social identity, but it is perhaps more widely applicable. I've been trying to figure out ways to modify my behavior for a long time, so maybe this will help. Perhaps to write more, I need to join some sort of writer's club. Certainly joining an ultimate frisbee league did wonders for my physical fitness. Would going back to grad school help put me in the frame of mind necessary to pursue work on social software?

On the other hand, I don't want to take this line of reasoning too far. I do believe there is a central core of tendencies that shapes how our unconscious cognitive subroutines develop. No matter how often I get plopped into a loud bar or party environment, I don't think I will ever suddenly morph into that cool dude who is utterly smooth with that situation. The cognitive subroutines are already in place to respond negatively to that set of inputs. To change that behavior would require a lot more than more exposure to that environment.

I suppose it's possible to do a slow morph, though. I mentioned this in the case of physical activity, but perhaps it's all about taking small steps, and changing one's response a little bit at a time. I've already taken a bunch of steps along this path, I think. I'm far more comfortable in dinner party conversations and the like than I was a few years ago. I can even survive in a bar- or club-like environment for a couple hours now, when I would have fled instantly several years ago. Continued exposure is starting to change my reactions. This is a case where the strategy of tossing somebody in the deep end to teach them how to swim is ineffective, I think. There's too much to process for that to work effectively. But by slowly changing the environment from one of comfort towards one of challenge, the cognitive subroutines will also be modified slowly, such that by the end, they will seamlessly handle the challenging environment and the person will be stunned at how easy it all seems. I think we've all had that feeling when we've learned something new, when we finally get it right - we say "Wow, that's easy!" with a tone of pleasant surprise. We never would have imagined that we could learn it, but by building up the behavior step by step, when it all comes together, it does seem easy.

There's some interesting stuff here. Who knew all this would come out of my initial question of "Does it feel weird to be home?" One thought is that I need to spend more time blogging - just sitting down and starting to write, even with only a vague thought to start with. The exercise of developing an idea is one of those things that I need practice at, and the only way to get better is to practice and to continue to develop my comfort with it. Heck, one of these days, I may be able to turn myself into a real writer. Tomorrow maybe I'll get into some of my initial thoughts on Latour's book. Or maybe I'll talk about something entirely different.

posted at: 08:26 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 24 Mar 2005

Virtual cues
There was one particularly interesting topic at the dinner party which I'll record here so that I can hopefully pick up on it later. We were discussing the role of technology-mediated communication such as cell phones and email in our lives. One woman was trying to make the case that we should give up on it, that it was only making our lives shallower and more wasteful, that it wasn't "real" communication. She made the good point that we would never conduct an interview over email, because there are so many cues that you pick up when you're talking to somebody in person. Given how much of my social life I conduct via technology, though, I had to disagree that it was a complete waste of time.

My contention, which I need to develop further at some point, is that we've had centuries to develop our ability to read physical cues. And we can still easily get fooled, because people like con artists take advantage of our trust. I think that we are starting to develop the understanding of cues necessary to make similar distinctions in the virtual world. In the real world, we're well trained to thin-slice and ignore most of the information coming in. I think a few of us in my generation have, and many more in the next generation will have, the ability to effectively parse information online at a preconscious level and ignore big swathes of it to find what we're looking for. I used the example of me versus my mother when it comes to chain letters and other net dreck - my mom will sometimes forward me stuff that I immediately dismiss as outdated or a scam or something, just because I've been on the net longer and have more experience with what a legitimate email looks like. Or my ability to use Google and other online tools effectively to find things in a few seconds that other people cannot find in an hour.

We'll also develop better tools for managing our virtual attention - right now, you pretty much have to look at everything in your inbox, but as spam filters get better, we'll find ways to reduce the cognitive load of dealing with computer communications. I think. It's yet another interesting area of exploration for products that would be really useful, even though I don't really have a good picture of what they would look like.

We also discussed how the use of such technologies changes our communication. The difference between writing letters to keep in touch versus an email list, for instance. The letter is good for deep one-on-one communication. The email list is good for shallow group awareness. Is one of these "better" than the other? It depends on your values. I think both have their place. I'm definitely in much better touch with my college group of friends because of various email lists than I ever would have been if I had to write individual letters to all of them. At the same time, I have my core group of close friends who I see regularly, even though some of them live on the other coast.

As somebody pointed out, to some extent, the email lists promoting shallow community awareness are a virtual replacement for the small town community we once had, where everybody was peripherally aware of everybody else's business, thanks to a few gossip-mongers at the general store. Instead of being tied to a physical location, though, these communities are now online, a topic which I started to address in this old post, where I point out that until recently, "the idea of being able to form a community with people who were not geographically co-located with you was laughable."

I guess the point is that communication technology is not good or bad in and of itself. It's how we use it. Certain technologies encourage certain ways of interacting, thank you McLuhan, but we still choose which technologies we use. If I want shallower group interactions, I use an email list. If I want a one-on-one conversation, I use instant messaging or a letter or a phone call or a personal visit. Having more options at our disposal is a good thing in my opinion, so long as we master how to use them effectively. Otherwise we disappear into information overload. And that's where developing better virtual cues to guide us through these virtual communication spaces is a high priority. Hah! Managed to complete the circle and bring us back to where we started!

posted at: 08:22 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Tue, 22 Mar 2005

What is powerful, part two
[Apologies for the barrage of posts - I'm trying to be more disciplined about spending a couple hours writing in the morning, and, well, I generate a lot of verbiage. The editing part still needs work obviously. But you'll have to suck it up. Or just skip it.]

In the previous post, I suggested a definition of powerful, as it relates to art and ideas, as being that which connects people. But being the contrary person I am, I'm immediately going to offer another viewpoint. Last night while thinking about what the value of a network of ideas was versus an individual idea, I wondered if I could tie this whole discussion into the science of networks, as described in Six Degrees. Perhaps in the tipping point phenomenon. I mentioned in my first cognitive subroutines post how I occasionally have flashes of insight, where ideas realign into a new pattern. Is that a tipping point in my neural net? Do different people have different threshold levels of evidence, such that some generalize quickly, and others need a preponderance of evidence?

Then another thought struck me. The thing that makes the small world phenomenon work is the unanticipated links between disparate parts of the network. The small world phenomenon doesn't work if people only know their local friends. It only works when a few people (not many at all, according to Watts) link their local set of friends to a set of friends far away. The far links are the powerful ones that make the entire network "small".

Once I thought of it that way, the extension to ideas was obvious - ideas that connect wildly disparate modes of thought are powerful, because they link up different areas of the idea network. The most powerful ideas are the ones that cross disciplines, connecting things that nobody thought were even related. Maxwell unifying electricity and magnetism. The electron shell theory providing a basis for the chemical periodic table. I like this perspective because it makes the connection to the science of networks explicit. We can think about how the different idea networks interrelate, and how to construct links between them that will make the idea network as a whole more compact.
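
To make the "far links" claim concrete, here's a minimal sketch of the small-world effect using the Watts-Strogatz model (my own illustration, assuming the networkx library is available): rewiring even one percent of local links into long-range ones collapses the average degrees of separation.

    # Sketch of the small-world effect: a few long-range links make the
    # whole network "small" (Watts-Strogatz model via networkx).
    import networkx as nx

    n, k = 1000, 6  # 1000 people, each starting with 6 local friends

    for p in (0.0, 0.01, 0.1):  # fraction of links rewired to far-away nodes
        g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
        avg = nx.average_shortest_path_length(g)
        print(f"far-link fraction p={p}: average separation = {avg:.1f}")

    # With p=0 (only local friends), the average separation is huge;
    # rewiring just 1% of the links into far links shrinks it dramatically.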

So this is a different definition of powerful than the one in the previous post. That previous post started with art and moved to ideas; can I do the reverse and apply this new definition to art? It's unclear. What does it mean to connect different areas of art? To take one example, music that breaks barriers is often seen as revolutionary. Rock and roll built off of the blues, but brought it into the mainstream. I suspect the same is true in art, but I'm not sure I know my art history well enough to come up with any examples. Perhaps Gauguin's incorporation of Pacific Island art into his work.

Now we have two definitions of powerful. One is about the effect something has on us personally, and our connections with each other. The other is about the effect something has on the network, growing the capabilities of the network by providing more links, where the advancement of the field is perceived as being a good in its own right. Is one definition "better" than the other? It's hard to say. But I find it interesting that my speculation on art as a web has opened up into this whole separate discussion on value and power. Down the rabbit hole we go.

posted at: 09:08 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

What is powerful?
In yesterday's post, I quipped "art is in the network, not in the nodes." While walking around yesterday, I started trying to figure out what I meant by that. It's a cute quip, but what does it mean? I also wanted to tie it into the ideas I presented towards the end of this post, where I say "It's about the network of ideas. An individual idea isn't very useful or exciting to me. It's about how it hooks into a big picture." Again, the network, not the nodes.

Where to start? Let's start with the idea of value. Or to put it more bluntly, power. What does it mean to be powerful? In art, we think of a piece as being powerful when it has an effect on us. Generally an emotional effect, but it may have an intellectual impact on us. Picking up from yesterday's discussion, though, the power is not in the piece itself; it is in the connection between the piece and the viewer. We can all think of pieces of art that have a powerful effect on us, that are disdained by the world at large. The TV show Buffy is a good example - many would not even call it art, but it resonated strongly with me. It may not be powerful to the general audience, but it is to this audience of one. I think this demonstrates that the locus of power is not in the work itself, but in my connection to it.

What do we mean when we say a piece of art is powerful, when we imbue the object itself with that quality? We generally mean that it has a powerful effect on most people who view it. There are always going to be curmudgeons or naysayers who dislike any given work. But the greatest works are the ones that speak to everyone. They bring people together by evoking similar reactions in a whole group, demonstrating that no matter what their surface differences, they have the same reaction to this piece. They create an instant community. I think the Brahms Requiem is a good example of this. When we performed it soon after 9/11, it brought the whole symphony hall together into a powerful statement of mourning and hope.

How does this definition of power extend to the world of ideas? Are ideas powerful insofar as they help create connections between people? This is an attractive definition. What is the single most powerful idea in the world? "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life." This idea has bound together hundreds of millions of people into a single faith. It has provided the basis for innumerable communities, both local and global.

What are some other powerful ideas in this bridging sense? The idea of the scientific method is one. The world of science extends across nations and continents. Perhaps sports, as I mentioned in that instant community essay. It also explains why it's so important to me to do my thought development in a blog, in public, garnering feedback. The ideas in and of themselves are interesting, but what I really want is to think of ideas that provide a new viewpoint on the world to myself and others. And I can't do that in isolation, only in connection with others.

I think the interesting thing here is that we have a definition of powerful as the quality that allows people to connect to each other. Art or ideas do not have an inherent value; they have value in their ability to connect people. Being the social creatures that we are, we place the highest value on things that let us create social bonds among us. I like this idea a lot. It re-orients us to the value of human connection, and indicates that our connections with our friends and family are our most valuable possession. And that is a message that I totally support.

posted at: 08:43 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 21 Mar 2005

Art as a web
DocBug put up an interesting post, wondering why we put all the fame and glory on a particular artist, when their work is often the result of a dense web of collaboration, influences and support. I started responding to that post in a comment, and then realized I had a lot more to say than I thought I did, so I'm responding in my own blog.

Here's the basic concept. Our culture has a tendency to try to objectify things, not necessarily in a pejorative sense, but in the objectivity sense most commonly associated with journalism. That there is a thing, and it has these properties that are part of the thing's ineffable nature. That things are one thing or another, in a Platonic ideal sort of sense. Associating qualities intrinsically with an object, rather than describing the object as possessing a quality that it could later give up, tends to confuse things. This is one of the reasons that people like Robert Anton Wilson suggest we use a version of English called E-Prime, which abolishes "to be" and all of its variants.

How does this apply to the situation in question? We want to be able to easily assign credit or blame to people, to have a simple relationship between cause and effect. To take an unrelated example, when somebody does something hurtful to us, it's easier to say "They are evil" than it is to understand why they might have chosen to take that action. It's simplistic thinking, but it has pervaded our society, and holds true in art as well. If we like or dislike an art piece, we give credit/blame to the artist. We tend to project all of our personal feelings and perceptions of the art onto the artist, and, in our own minds, give the artist all of those qualities.

This is why it is so easy to get in an argument about art; two people may have very different reactions to a piece of art, which they both associate with the piece of art itself, rather than with their own relation to art. So they can't understand what the other person is talking about, because they are seeing two completely different pieces of art, even though they're looking at the same physical object. The meaning is not in the art itself, but in each person's individual connection to the art.

And this is where I think I can tie it back into the original point that Bug was making. Art has no value in and of itself. If an artist makes a beautiful piece, and nobody ever sees it, or if a composer writes a beautiful song, and nobody ever hears it, is it art? I would contend that it is not. Art is about creating that connection between the artist and the audience via the piece of art. In geekspeak, art is in the network, not in the nodes.

That's also true for the creation of art, as Bug points out. Art does not get created in a vacuum. Artists need tools to do their work. They influence each other. They are influenced by what's going on in society. Looking at a piece of art divorced from all of its sociopolitical context is almost nonsensical. It's making the mistake of assuming that the piece of art carries all of its context with it, that any qualities associated with the art are contained within the object, not in the network. I'm pretty sure I'm restating the basic postmodernist position at this point, from my meager understanding of it, so I'll leave it at that, and move onto another question.

How did we end up here? Why is our American society so inclined to stuff all of the properties of an object into the object itself, rather than into the network of relationships surrounding the object? How did we get to a position where our president could declare entire nations evil, and be taken seriously? (Okay, that's not directly relevant to this essay, but I think it's a manifestation of the same phenomenon.)

Here's what I think. A hundred years ago, Americans would have had a very different perspective. At that point, we were all deeply embedded in our communities. There was a tight web of relationships in any given town, as none of us could be self-sufficient, so we had to know the butcher, or the farmer, or whatever. (I'm idealizing here - go with it). This let us appreciate the power of the network, of realizing how we depended on each other in a long-term sense.

In the modern age, we've moved to a far more self-sufficient model, where our relationships with many people happen in a purely transactional mode. I go to the supermarket, I pick out some stuff, I hand them money, and I leave. All of the networks and relationships necessary to make that happen, from the shipping and distribution networks, to the bar code scanner, to the credit card reader, are hidden. It's implicit, not explicit. So I treat the supermarket, and all of its employees, as mere objects, rather than as people. I feed in money, I get out groceries. No human interaction. To use Fight Club's description, we are a single-serving society.

I'm going to posit that Asian and European societies do not have this same object-oriented perspective. (Wow. I just realized that object-oriented is the perfect nerd description of it, because a software object in OO design carries all of its properties and methods with itself. Damn.) Asian societies because of the pervasive influence of Zen and Buddhism and Hinduism, which explicitly state the way that we are all interconnected. And European societies, because they have done a better job of clinging to the human side of interaction, of having the denser communities.
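
Since I've invoked the object-oriented framing, here's a minimal sketch of the contrast (my own toy illustration, nothing rigorous): the OO view stuffs a quality like "powerful" into the artwork object itself, while the network view locates it in each viewer's relationship to the work.

    # The "object-oriented" view: qualities live inside the thing itself.
    class Artwork:
        def __init__(self, title, powerful):
            self.title = title
            self.powerful = powerful  # treated as an intrinsic property of the object

    # The "network" view: qualities live in the connections, not the nodes.
    # The same work has no intrinsic power; power is a property of each
    # (viewer, work) relationship, and can differ wildly between viewers.
    connections = {
        ("me", "Buffy"): {"powerful": True},       # resonates strongly with me
        ("critic", "Buffy"): {"powerful": False},  # disdained by the world at large
    }

    def is_powerful(viewer, work):
        return connections.get((viewer, work), {}).get("powerful", False)

    print(is_powerful("me", "Buffy"))      # True
    print(is_powerful("critic", "Buffy"))  # False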

The connection between the American single-serving society and the American tendency to view art (and everything else) in an object-oriented fashion is still a bit fuzzy, but I think it makes sense. When we treat everything in our lives as objects from which we are trying to get stuff, and which we evaluate based on whether it has the qualities that we need at any given point in time, it's not surprising that we start to associate the qualities directly with the object itself, rather than with the network of relationships associated with the object.

I think there are some really fertile ideas here, especially in trying to think about what it means for the value to be in the network, how that could be measured, and how that could be applied if we recognized it explicitly. But I'm going to pick up on those another time. Or not.

posted at: 06:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 17 Mar 2005

Cognitive subroutines extensions
In my last post about cognitive subroutines, I extended the idea to allow for us to use other people as part of our internal routines. I was using this in the context of team building, but this idea of leveraging elements outside of ourselves can be extended even further. While I was at the Whitney yesterday, I was poking around their bookstore and saw a book called Me++: The Cyborg Self and the Networked City, by William J. Mitchell. I picked it up, flipped through it, and every page I flipped to seemed to have an interesting observation. So I bought it on the spot. The other book I'd brought on this trip (Politics of Nature by Bruno Latour) was just proving too dense for me to deal with, so I figured I would read this instead. It's excellent. He describes how our individual selves are slowly melting into the environment, to the point where it's hard to say where our "self" ends. A great non-cyber example he gives is of a blind man walking down the street using a stick to navigate. Is the stick part of his sensing system? Absolutely. Is it part of "him"?

Tying this back into the cognitive subroutines theory: in the same way that cognitive subroutines can rely on other people to perform part of their processing, they can obviously rely on other external mechanisms as well. I don't bother remembering where anything online is any more, because I can just use Google. On the output side, I don't have to think about the individual physical actions necessary to drive a car; I just think "I want to go there", and it pretty much happens automatically. So we can use elements of our environment to increase our processing power, and to increase our ability to influence that environment.

In fact, this is really interesting, because it gets back to a question I asked at the end of this post, which was how to reconcile this theory with the ideas in Global Brain. By expanding the scope of the cognitive subroutines to include external influences and external controls, we then build in the power of the collective learning machine, because each of us will choose which elements of the external environment to leverage. Things that are useful, whether as mental constructs for easing cognitive processing or as physical artifacts for increasing our control, will get resources shifted towards them.

This is essentially the idea of the meme at work. A good idea, a good way of looking at the world, is viral in nature. I come across a way of looking at things. I start using it, it explains a lot to me, and I find it valuable. I start telling other people about it, whether at cocktail parties or via this blog. If they find it useful, they pick it up. And so on and so forth. It gets incorporated into their internal cognitive subroutines, and soon it is embedded so deeply that they can't distinguish it from "reality".

I was thinking about this recently in the context of books. I like reading, obviously. I like books with ideas, books that express a certain viewpoint on the world. I was trying to answer the question of why I read, what makes a book like Me++ so compelling to me? I think it is this opportunity for picking up new ideas, new cognitive subroutines that I can then apply elsewhere. I described in that original cognitive subroutines post that moment when a bunch of synapses light up, and a whole new set of connections are made in my brain. There's almost an audible click as ideas lock into a new formation. And books are a way of finding those formations. They are an opportunity to hook the ideas I have in my head into the unfathomably large set of ideas that is already out there in the space of human knowledge. Books help me to find ways to hook my ideas into those of thinkers past, as well as giving me the ability to leverage the insights of those thinkers, by not having to recreate their work.

It's about the network of ideas. An individual idea isn't very useful or exciting to me. It's about how it hooks into a big picture. This is probably because I'm a highly deductive thinker. When I was a physics student, I would struggle woefully for the first half of the term, as they introduced individual concepts in an isolated context. At some point, though, the light would go on, and I'd see the whole structure, and then it all made sense; I could see how the individual concepts fit together, and how to use them. I need those kinds of structures to sort through ideas. That may be an individual thing, though.

Anyway.

This isn't the clearest post I've done. But I like the direction this is heading. I think I have a provisional way of hooking the cognitive subroutines theory into the global brain network emergence theory. I like Me++'s idea of extending ourselves out into infinity, and how that applies. I like how I can tie it into my own tendencies, from liking to read, to deductive thinking. This is actually getting to the point where it's almost coherent and consistent. Now I just have to put together an outline. Yeah. Any day now.

posted at: 09:00 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 13 Mar 2005

Cognitive trust
[Bonus post that I wrote at the airport last night]

I liked this quote from Emotional Design:

"Cooperation relies on trust. For a team to work effectively each individual needs to be able to count on team members to behave as expected. Establishing trust is complex, but it involves, among other things, implicit and explicit promises, then clear attempts to deliver, and, moreover, evidence. When someone fails to deliver as expected, whether or not trust is violated depends upon the situation and upon where the blame falls." (p.140)

This would seem to be the team equivalent of cognitive subroutines. I can imagine that analogous negotiation and trust-building is happening within the swirl of our subconscious as we navigate through the world. Stereotypes that seem to work well get reinforced, and encoded into cognitive subroutines. Assumptions that prove to be wrong are trusted less the next time, with more restrictions placed on their activation conditions.
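
Here's a minimal sketch of that reinforcement loop, purely as a toy model of the analogy (the names and numbers are made up): each subroutine carries a trust weight and a set of activation cues; success reinforces it, while failure lowers its trust and restricts the conditions under which it fires.

    # Toy model of trust-building among cognitive subroutines: routines that
    # work get reinforced; ones that fail are trusted less, and their
    # activation conditions get restricted.
    class CognitiveSubroutine:
        def __init__(self, name, triggers):
            self.name = name
            self.triggers = set(triggers)  # cues that activate the routine
            self.blocked = set()           # contexts where it has failed before
            self.trust = 0.5               # how readily we rely on it

        def fires(self, cues):
            return (self.trust > 0.2
                    and bool(self.triggers & cues)
                    and frozenset(cues) not in self.blocked)

        def feedback(self, cues, worked):
            if worked:
                self.trust = min(1.0, self.trust + 0.1)  # reinforce what works
            else:
                self.trust = max(0.0, self.trust - 0.2)  # trust it less next time
                self.blocked.add(frozenset(cues))        # restrict its activation conditions

    # A snap judgment that works in one context but misfires in another:
    routine = CognitiveSubroutine("the-loud-one-is-in-charge", {"meeting"})
    routine.feedback({"meeting", "sales pitch"}, worked=True)
    routine.feedback({"meeting", "design review"}, worked=False)
    print(routine.fires({"meeting", "design review"}))  # False: that context burned us
    print(routine.fires({"meeting", "sales pitch"}))    # True: still trusted here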

It's interesting to me because it provides an obvious extension of the cognitive subroutines theory to interpersonal interactions, at least in a team sense. I've talked about team building before (and actually say something very similar to Norman's quote), and part of what I think makes a good team is that we can offload tasks onto other people; as I put it in that post, "my teammates trust me to deal with fixing the bugs; once it's reported to me, they forget about it and move on." A team can achieve more than the sum of its parts because each can farm out processing to others who are in a better position to handle a given situation.

It's the cognitive equivalent of labor specialization. If I'm good at software and my coworker isn't, then it makes sense for them to ask me to perform a software task they need done, because I'll do it in far less time than they would. In return, my coworker who is better in the lab may run an experiment for me. Both of us stick to what we're good at, and we can leverage our expertise to make everybody more productive and efficient.

The other analogy that I like is that if we treat the brain as a set of cognitive subroutines that can call each other, then there's no reason not to think of other people as subroutines that we can also call upon. When we first start working with another person, we don't quite know what their API is or what their capabilities are, but as we learn to trust and respect them, we can learn to call upon them with little more overhead than a subroutine in our own head. It's kind of a bizarre concept, but it's a first step towards a Global Brain, if it works.
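
Here's a minimal sketch of the "people as subroutines" picture (my own toy code, with invented names): once trust is established, calling on a teammate looks, from the caller's side, just like calling a routine in my own head.

    # Toy sketch: a trusted teammate can be "called" with no more overhead
    # than an internal subroutine.
    from typing import Callable, Dict

    def my_own_debugging(task: str) -> str:
        return f"I fixed '{task}' myself"

    def coworker_runs_experiment(task: str) -> str:
        # Offloaded: they handle it, I forget about it and move on.
        return f"My coworker handled '{task}' in the lab"

    # The team as a dispatch table: internal routines and trusted people
    # side by side, indistinguishable to the caller.
    team: Dict[str, Callable[[str], str]] = {
        "software bug": my_own_debugging,
        "lab experiment": coworker_runs_experiment,
    }

    def handle(kind: str, task: str) -> str:
        return team[kind](task)

    print(handle("software bug", "crash on startup"))
    print(handle("lab experiment", "run the new assay"))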

posted at: 15:46 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 10 Mar 2005

Cognitive subroutines and learning
I was reading Emotional Design by Don Norman the other day, and he was contemplating ways in which we could leverage emotional machines to improve the learning process. This got me kick-started again on thinking about applications of the cognitive subroutines theory that I've been playing with. As a side note, I think I'm finally emerging from the dearth of ideas I was suffering from for a week or so. Apologies for the banality of posts during that time.

So the question of the day is: How do we leverage cognitive subroutines for the sake of learning? What does this theory tell us about how to teach people something new?

I covered this a little bit in the footnotes of that first post. Teaching somebody a new physical action requires breaking it down into easily digestible chunks. Each chunk is practiced individually until it's ingrained in the subconscious and can be performed autonomously. In other words, we build and train a cognitive subroutine that can then be activated with a single conscious command like "hit the ball", instead of having to call each of the individual steps like "take three steps, bring the arms back, jump, bring the right arm back cocked, snap the arm forward while rotating the body, and follow through". Watching toddlers figure out how to walk is also in this category. At first, they have to use all of their concentration to figure out how to take a step, but within a short period of time, they just think "I wanna go that way" and run off.
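
In code terms, the training process is just composition: the individual chunks get welded into one routine that consciousness can invoke with a single call. Here's a minimal sketch (my own illustration):

    # Before training: consciousness has to call every individual step.
    def approach():        print("take three steps")
    def load_arms():       print("bring the arms back")
    def jump():            print("jump")
    def cock_arm():        print("bring the right arm back, cocked")
    def swing():           print("snap the arm forward while rotating the body")
    def follow_through():  print("follow through")

    # After training: the chunks are welded into a single subroutine that
    # can be activated with one conscious command.
    def hit_the_ball():
        for step in (approach, load_arms, jump, cock_arm, swing, follow_through):
            step()

    hit_the_ball()  # "hit the ball" - one call, six ingrained steps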

For physical activities the analogy to cognitive subroutines is pretty straightforward, and was what I was thinking of when I first came up with this idea. How does it map to other, less concrete activities? Let's take the example of math. We start out in math learning very simple building blocks, like addition and subtraction. We move from there to algebra where we build in an abstraction barrier. As we learn more advanced techniques from calculus to differential equations, we add more and more tools to our toolbox, each of which builds on the one before. Trying to teach somebody differential equations without them understanding calculus cold would be a waste of time. So in a relatively linear example like math, the analogy to cognitive subroutines is also straightforward.

What about a field like history? Here it becomes more difficult. It's unclear what the building blocks are, how the different subfields of history interrelate, and what techniques are necessary at each level. Here we start to get a better picture of where the cognitive subroutines analogy may start to fail. It applies when there are techniques to be learned, preferably in a layered way where each technique depends on learning the one below it, much in the way that subroutines are built up and layered. Trying to fit more broad-based disciplines such as history into that framework is going to be a stretch.

Perhaps history might be a better example of the context-dependent cognitive subroutines, where we have a few standard techniques/theories that get activated by the right set of inputs. So we have our pet theory of socioeconomic development and see ways to apply it to a variety of different situations (I'm totally making this up, of course, since I'm realizing that I don't actually know what a historian does). Actually, this makes a lot of sense. In fact, I'm doing it right now; I came up with a theory (cognitive subroutines), and am now trying to apply this theory everywhere to see how it fits. By trying it in a bunch of places, I'm getting a better sense of what the proper input conditions for the theory are, and can see how to refine it further.

So for history, the important thing to teach may not be individual theories, but the meta-theory of coming up with good theories in the first place. In other words, critical thinking skills. As mentioned in my new directions post, I think such skills are broadly applicable, from politics to history to evaluating advertising. With such meta-skills, there would be an infrastructure in place for building up appropriate cognitive subroutines, and for understanding the limitations of the cognitive subroutines we already have.

One last thought on the subject of cognitive subroutines and how they apply to learning. What does the theory have to say about memorization-based subjects? From medical school to history taught poorly, there are many subjects which are memorization-based. I don't think there's really anything to be gained here. Memorization, like cognitive subroutines, is all about repetition, but I don't think that the cognitive subroutine theory gives us any new insight into how we can improve somebody's memorization skills.

I also tend to think that memorization will become less and less useful moving forward, as I noted in my information carnivore post. Why memorize when you can Google? However, developing the cognitive filtering subroutines necessary to handle the flood of information available is going to be tricky. That was the point of that information carnivore metaphor, but it's interesting that it comes back up again in this context.

Anyway. There's some fertile ground here for thought, again trying to think of ways in which this theory can be less descriptive, and more prescriptive. I'll have to spend some time trying to flesh things out.

posted at: 20:26 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 09 Mar 2005

Clay Shirky on cognitive maps
Clay Shirky had an interesting idea in an article over at Many-to-Many, where he divides the world between radial and Cartesian thinkers. Here's how he makes the distinction:

Radial people assume that any technological change starts from where we are now - reality is at the center of the map, and every possible change is viewed as a vector, a change from reality with both a direction and a distance. Radial people want to know, of any change, how big a change is it from current practice, in what direction, and at what cost.

Cartesian people assume that any technological change lands you somewhere - reality is just one point of many on the map, and is not especially privileged over other states you could be in. Cartesian people want to know, for any change, where you end up, and what the characteristics of the new landscape are. They are less interested in the cost of getting there.

It's a handy distinction. The radial thinker says "Okay, this is where we are, let's see where we can go from here." The Cartesian thinker says "Over there is where we need to be. I don't care where we are, but let's go that way." It's the pragmatist vs. the idealist, the engineer vs. the scientist, incremental improvement vs. paradigm shifts. Shirky applies the distinction to help dissolve some of the differing perspectives on Wikipedia, and clarifies why he thinks the two sides are talking past each other.

The interesting thing was what happened when I tried to figure out which kind of thinker I was. My first reaction was, "Oh, yeah, I'm totally a radial thinker", thinking about my tendencies at work where I figure out the minimum change I can make to get something working right now. That's partially out of efficiency (aka laziness), and partially a result of having seen far too many Cartesian thinkers get bogged down trying to do a total redesign in a world of changing requirements. So when presented with a feature request, I tend to take stock of what I have already implemented, and think about the easiest way to kludge it to add the feature, rather than spend (waste) time thinking about what future features might be added, thinking about how I should design to handle the most general case, etc. From this viewpoint, it seemed obvious that I was a radial thinker.

Then I thought about it some more, and realized that in my personal life, I'm far more of a Cartesian thinker. I have a vision of an ideal, but it's far from what I currently have, and making a few minor changes will make very little headway in terms of moving me towards that ideal, so I don't bother doing anything at all. We can see this in my lack of progress towards finding a new host for this blog, or towards becoming a social software programmer, or even in little things like how long it took me to buy a bed.

So now I'm both a radial and a Cartesian thinker. That doesn't make sense. Except that I think it does, in light of my theory of context-activated cognitive subroutines. In one context, I think one way. In another, I think the other. When I poke and prod further, I can think of reasons why I have different opinions in different contexts; I'm a radial thinker at work because I've seen too many efforts fail at trying to achieve the ideal general case, whereas my approach of rapid prototyping and incremental improvement has done well for me so far. I'm a Cartesian thinker in my personal life because I tend not to compare myself to others, and instead compare myself to my potential, to a putative ideal version of myself. Different contexts, different identities.

And I can break it down even further. In my life at work as a programmer, I'm a radial thinker, as previously noted. In my dealings with management, though, I'm still an unrepentant idealist. I know there are reasons for timesheet software or process and micro-management, but I can see where I think we should be, and get really frustrated that we seem stuck in an entirely different part of the phase space. Such frustration is a Cartesian reaction, because Cartesian thinking (in Shirky's definition) doesn't accept reality as the starting point, but only as a possible destination. So even my work identity is fractured along these lines. Lots of grist for the cognitive subroutine theory in this seemingly simple observation of different thinking patterns.

I'll close with some thoughts on the radial vs. Cartesian dichotomy that Shirky suggests. In the long run, I think the radial thinkers will have the advantage, for all the reasons that Shirky has mentioned previously with regard to Wikipedia. Cartesian thinkers spend a lot of time discussing how things should be, and complaining that the world doesn't match the ideal they have in their head - danah's response illustrates this attitude, where she says essentially that the radial thinkers' improvements are horizontal moves that don't address the underlying problems she has with Wikipedia (or Britannica, for that matter). Radial thinkers don't spend their time exploring the entire phase space of what might be possible; they start with the way things are, and get to work changing it. It's using one's effort efficiently. In my work life, some of my most frustrating coworkers have been incredibly intelligent PhDs who want to spend several months perfecting a mathematical model or nailing down every possible contributing factor to an analysis, instead of saying "Okay, it's good enough, let's see what we can do." Again, it's the engineer vs. scientist viewpoint. There's a place for the academics, and for the dreamers, to help imagine new ideals, and guide the incremental changes of the radial thinkers. But in the end, the radial thinkers are going to be the ones building tools and getting stuff done.

posted at: 22:28 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 07 Mar 2005

Wonderfalls
As I mentioned in my post on Firefly, I also got the DVD set of Wonderfalls in the same Amazon order. And I've watched that whole series now as well. My original review actually stands up pretty well even after watching the rest of the unaired episodes, in terms of describing the overall feel of the show.

I do think it was a pity that the show got cancelled. There were several excellent episodes that were never aired. Fortunately, the creators had a feeling they were going to be cancelled (they actually started their "Save our Show!" campaign before the pilot even aired according to one of the featurettes), so the thirteen episodes produced tell a relatively coherent story that has a happy ending.

I'm not sure whether the show's premise would have held up long term, though. The talking animals schtick is very cute, but the need for the "muses" to be deliberately unclear (e.g. "Save him from her!") to create wackiness and confusion gets more annoyingly obvious throughout the episodes. Of course, when the plot demands it, the muses can also be very clear (e.g. "Take a picture!" or "Lick the light switch!"). So they essentially end up as writer bailouts, letting the writers extricate themselves from situations at will, or create ridiculous ones: the entire Heidi storyline, which dragged on for four episodes, was manufactured by the muses for no apparent reason. However, it let us see a lot of Heidi, played by Jewel Staite, who played the cute mechanic on Firefly, so that wasn't so bad.

One thing I noticed while watching the series is that the show totally depended on the wonderfully expressive Caroline Dhavernas. Her annoyance and exasperation with the muses shines through, even as she grudgingly does their bidding. It was even more apparent when I watched a couple of the episodes with the commentary tracks turned on: even without hearing the dialogue, you could track what was going on just by watching her face. In fact, all of the actors are excellent. I happened across a site that has shooting scripts, and while the scripts are fun to read on their own, they reach a new level of humor in the actors' readings, whether in their comic timing, their facial expressions, or even just the beat they wait before delivering their lines. The co-creators lauded their actors on the commentary tracks, and I think the praise is well-deserved.

Anyway, yeah. I recommend the series, if you like screwball-type comedy with an overlay of existential angst and confusion. Several of the episodes are really funny - I was watching one last night and just laughing out loud at some of the dialogue and absurd situations. Plus, it's relatively cheap - $28 at Amazon for all thirteen episodes plus some featurettes, which works out to about $2.15 per episode. Thumbs up.

P.S. Parents are still in town, brain is still dead. No interesting thoughts. I'm hoping to get recharged next week when I head to New York. I've got a ton of backlogged ideas to work on, but just can't quite get started on them.

posted at: 22:42 by Eric Nehrlich | path: /rants/tv | permanent link to this entry | Comment on livejournal

Tue, 01 Mar 2005

Prescriptive context
Picking up on the identity as context post (as an aside, I need to figure out a way to thread posts, like on a bulletin board, except with comments - I've got to start doing research on my blogging software options - yes, I know I've said that before), it's time to think about how such ideas can be used. This is part of my new attempt to move away from my typical passive descriptive stance and towards an active prescriptive role, because all the cool pundits offer solutions as well as new ways of looking at the world. And I want to be a cool pundit, after all.

One obvious consequence of the idea that we are choosing our identity by choosing our social groups is that we can modify our identity by putting ourselves in situations where the environment reinforces behaviors we want to encourage. I'm thinking specifically of Alcoholics Anonymous here, where part of the power of AA is the social structure that it provides to help alcoholics quit. It is always easier to do something when other people are doing the same thing around you. Our herd instinct takes over and helps to reinforce the behavior.

We can leverage our social tendencies even more explicitly. For instance, it is drilled into us that it is important to keep promises to others, that trust is the framework around which our society is built. It's entirely possible that such behavior is wired into us evolutionarily via social feedback mechanisms. So when we really want to change our behavior, we make an announcement publicly that we are planning to do so. Then all of the social feedback mechanisms are called into play, and we are more likely to stick to our resolution. This is the basic idea of the wedding, for instance.

As a specific example, I started this blog in part as a public resolution of this type. I had all of these interesting thoughts, but I would never get around to writing them down. Putting them in a blog, thereby getting encouragement and feedback from readers, made it easier to motivate myself to write down the next set of observations, which engendered more feedback, and so on, creating a virtuous circle of behavior modification. At this point, I think it's self-sustaining; I am enough in the habit of writing that I don't necessarily need the public feedback, but it took over a year to get there. And I don't think I would have had the self-discipline to write consistently for a year if it were just for myself; as a counterexample, I have tried many times to start keeping a personal journal, and have always failed. So by leveraging my social instincts, in terms of not wanting to disappoint my (few) readers, I was able to change my behavior.

Another example is the importance of teamwork to a project. On a good team, everybody is doing their best, not wanting to disappoint their teammates. The team jells, and synergistically achieves much more than each person would have achieved working independently. From a personal point of view, I tend to be more productive when working with a partner. I am willing to accept failure for myself, but I don't want to fail somebody else. Again, leveraging our social instincts changes the way we behave.

A further consequence of the "identity as context" theory is its darker side. I mentioned how it applies to cults in the original post, but it can be applied more widely than that. For example, expectations play a huge role in determining how we behave. I've alluded to this before in the context of education; kids that are told they're smart will often act smarter. Kids that are told they're stupid will act stupid. It's a self-fulfilling prophecy. Part of the advantage that gifted kids have is that they are placed in gifted programs, surrounded by other smart kids, where they say to themselves "Hey, I can do that!" They are placed in social contexts where they will succeed. Meanwhile, kids placed in a remedial program will think of themselves as stupid, blaming every failure on themselves, leading to a vicious circle of eroding self-confidence.

So what's the upshot of this post? If we believe the idea that social context helps to determine how we behave and thereby who we are, then we can take advantage of the idea that, as I quipped last time, "I choose to be the self that is activated by this group." By choosing the right group, we can modify our own behavior and create a new self. It's never easy; changing one's tendencies is hard work. That's why it's so important to use all of the tools at our command to help reinforce such changes.

Man. This post was much harder to write than I thought it'd be. It just never quite came together. But I've poked and prodded at it for well over an hour now, so I'm going to give up. I'll write a clarifying post if necessary. I might take a break for a couple days to let some ideas simmer and see if I can come up with a clearer line of attack.

posted at: 22:43 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 27 Feb 2005

Identity as context
Picking up on the cognitive subroutine thread, I had another thought yesterday. What is our self, our identity? To some extent, it is the holistic sum of all of our cognitive subroutines. After all, we judge somebody by how they react to different situations. At work, we like to see how people handle pressure. In social situations, we like people that are comfortable and easy to talk to. Since we don't have a way to read minds, all we have to judge other people by is the way they interact with us and with the world around them. There may be those that claim that we have some essential "character" that determines how we will react in a general sense, but I'm pretty skeptical of that idea (Aaron Swartz had a good post about "dispositionism" today). And I feel similarly skeptical about the idea of an eternal ineffable soul. Just so you know what my assumptions are.

What are the implications of the idea that we may be no more than the emergent interaction of our cognitive subroutines? If my speculation is right that the subroutines are activated by our environment and the context that we are currently in, it means that we are different people in different situations in a very real sense. If I'm hanging out with my college friends, I'm a different person than when I hang out with my family or when I'm at work. They each activate different aspects of my personality, changing how I react to things and the way I view the world. I know it isn't an earthshaking observation that we act differently in different social circles, but it's nice that it falls out of the cognitive subroutine theory so cleanly.

This puts our social interactions in a different light. In some sense, we look for groups of people that help us be the person that we want to be. Since each social group activates different aspects of ourselves, by choosing who we socialize with, we are choosing our identity. This is most obvious in high school with the forming of cliques, from cheerleaders to band members ("This one time? At band camp?") to nerds to burnouts. But it continues throughout our lives. We find people with whom we feel most comfortable, where we feel we can say "I can be myself." My current thoughts make me wonder whether saying that is equivalent to saying "I choose to be the self that is activated by this group."

Another aspect of the whole identity as context corollary to the cognitive subroutines theory is that it provides insight into why cults work. Everybody always asks how people get sucked so deeply into cults. Well-designed cults all share a few common tactics. The most important of these is to remove new cult members to an isolated compound where the cult members see nothing but other cult members. In the language of this post, it's removing any alternative contexts from their lives. No visits are allowed from family members, because that would elicit a different person than the one the cult is creating. In their isolated compound, they reward behavior beneficial to the cult, and punish unwanted behavior like questioning authority. Again, training of new cognitive subroutines.

What's another common cult tactic? Giving their members new names. The old name has too many cognitive subroutines associated with it, too many aspects of personality that the cult is trying to suppress. By giving the member a new name, the cult is essentially starting a whole new set of cognitive subroutines that have no connection with the old life. They are creating a new person, essentially. Names are powerful things. For a long time, I really think I behaved differently when I was around people that called me Perlick versus people that called me Eric. I think I've now harmonized the different aspects of my personality for the most part, but it's interesting to see how powerful a name can be.

Then again, I've always been particularly impressionable and susceptible to outside influences. When I was a kid, my mom could tell who I'd been hanging out with on any given day, because my speech patterns would actually change. I don't think it's anywhere near as extreme any more (there's a whole post buried somewhere in the idea of how we are all the sum of our influences, but over time, the influences become commingled so that it becomes harder to tease out individual influences), but I'm sure there's still an effect. For instance, I know my writing became more florid for a while after I read David Foster Wallace, with lines like "In our world of postmodern ironic world-weariness, something about the buzz as Barry Bonds steps into the batting box, as 40,000 people hold their breath together, breaks through our ennui and evokes images of a more primitive time, of gladiators and arenas. It's an exciting feeling. The mob mentality rises to the surface and we lose ourselves in it." So maybe it's just my perspective that sees identity as being context-contingent. But I don't think so.

One last caveat: I should emphasize that I'm not postulating that our minds are in any way actual computers that consist of self-programming subroutines. I do think that it's a useful metaphor for analyzing several aspects of human behavior, in a variety of contexts. I think that this post illustrates that it may even have applications in questions of identity. For me personally, it's a good reminder that choosing how I spend my time socially is choosing what kind of person I want to be. I could choose to be a soulless corporate drone. I could choose to be an alcoholic partier. I could choose to be an outdoors type. Right now, I seem to be choosing electronic ranting loner. Unabomber, here I come! Hrm. Maybe I should rethink that choice...

posted at: 19:57 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Fri, 25 Feb 2005

More thoughts on thin-slicing
I sent off a note to Malcolm Gladwell through his website with the nitpicks I mentioned in my review of Blink, in particular the height study and the Ted Williams story. Much to my surprise, Gladwell wrote me back thanking me for the observations and loving the Ted Williams story. Cool!

While thinking about it some more, I realized that the prejudice favoring tall people may actually be a form of thin-slicing in action. As the New Yorker article suggests, "In our height lies the tale of our birth and upbringing, of our social class, daily diet, and health-care coverage. In our height lies our history." If that's the case, then favoring tall people makes perfect sense. Tall people would tend to be healthier and stronger than short people in a world of scarcity. These days, when all of our needs are satisfied, at least in most of the industrialized world, the remaining variation is primarily due to genetics, but it would be understandable if some vestige of a bias towards height remains. So I took that idea and sent it off to Gladwell. We'll see what he thinks of it.

I also wanted to pick up on one of Beemer's comments where he points out that cognitive subroutines and thin-slicing are both ways to "optimize away mental processing". He lists a few examples such as peer pressure and deference to authority, where the answer you get will be right most of the time and is extremely energy efficient. Given that the situations where such strategies arise are not often situations where the wrong answer means immediate death, it's not surprising that our brains are optimized for efficiency rather than 100% accuracy. Man. I think I had another observation, but I've totally blanked.

One last thought on the subject for the night. At some point, I'm going to have to reconcile my thoughts on cognitive subroutines with the ideas of The Global Brain, which I quite liked. I don't see any obvious correlations between them, but since I currently find value in both of them, I feel like there should be a way to bring them together. More food for thought. But it's Friday night and I'm tired, so I'm going to drop it for now.

posted at: 22:10 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 24 Feb 2005

Firefly
[ed. note: As a complete break from the cognitive science type philosophy that has filled this space recently, we bring you a rant about television]

I finally got the DVD set of Firefly last week, and have now watched the whole series. For those of you who don't know, Firefly was a show created by Joss Whedon and Tim Minear. Joss, of course, was the creator of Buffy the Vampire Slayer, which is probably my favorite TV show ever. Joss also created Angel, a spinoff from Buffy, where he hired Tim Minear, who wrote several excellent Angel episodes and eventually became an executive producer. When Angel and Buffy were drawing to a close, Joss and Tim went off to create Firefly, a show that was a cross between a western like Bonanza and a sci-fi epic like Babylon 5. Yes, it was odd. Fox stuck it on Friday nights in their ratings death slot, and it got cancelled within ten episodes.

I thought the show was okay when it was first on. It didn't really grab me. I watched all of the episodes out of loyalty to Joss, figuring that it would eventually pull together, but it never quite did. It never achieved the sparkling dialogue or the heartbreaking character development that are Joss specialties. So I didn't get around to buying the DVDs for a while. However, when I was buying the DVDs of Wonderfalls, a show which also had its time on the air tragically cut short, I figured I should pick up Firefly as well, as a gesture of solidarity for Tim Minear. And I'm glad I did.

Part of the problem with Firefly was that the network showed the episodes out of order. In particular, Joss created this fantastic two hour movie pilot, where he introduces us to the myriad of characters aboard Serenity. The Fox network decided the pilot didn't have enough action, so they didn't show it. Instead, Joss and Tim had to scramble to put together an action-oriented mini-pilot episode where we are dropped into the middle of things and never really understand what's going on. Watching the real pilot first as intended makes a huge difference in connecting to the characters and giving the audience a chance to find its bearings. If we don't care about the characters, then we don't care about what happens to them, and all drama evaporates. It's all about the characters.

So rewatching the episodes in the proper order makes a big difference. It also makes a big difference to listen to the episodes where Joss does a commentary track, because he presents the big ideas that were driving the show. He had a grand vision of what he was trying to do with Firefly, and with that context in place, the show makes a lot more sense. But he did a poor job of transferring that vision to the screen in the admittedly limited number of episodes that he had. In particular, I think he got sloppy and failed to make the individual episodes as compelling as was needed to establish the franchise.

That's been a Joss failing for a while, though. The brilliance of the first few seasons of Buffy was that individual episodes were satisfying in and of themselves, while also serving to advance the arc. He was always on the edge of cancellation, especially that first season, as one of the first shows on a yet-to-be-established WB network. So I think he strived to make each episode individually satisfying and compelling, and then layered an overall season arc on top of those episodes. In the first three seasons of Buffy, he basically had a very specific vision of the season arc, and of the waypoints to making that arc happen. And he'd write the relevant episodes. So when you saw a "Written and directed by Joss Whedon" at the beginning of an episode, you knew stuff was going to happen. He'd shake up the entire Buffyverse, and then leave it to the staff writers to fill out the new implications/consequences of the shakeup with a bunch of standalone episodes, and then when those were played out, he shook it up again. But he never failed to make individual episodes satisfying in and of themselves - they gained an extra dimension of pathos and drama from the knowledge of the series arc, but could stand alone.

Because he'd worked so hard on developing both the individual character arcs and the overall season arc, the season finales were events, where he managed to bring all of his arcs together and tie them all up in one episode. When I go back to some of those episodes now, I'm always amazed at how much stuff he packs into those episodes. The groundwork has been laid all season, and then all of this crazy stuff happens as the floodgates are opened. And because each character was so well-established, it's clear that they have to react to the situation in a given way. The inevitable conflict and drama that ensued as a result of each character being true to themselves is part of what I loved about Buffy.

By season four, though, that was no longer true. Season four has a bunch of individually excellent episodes, but the overall season arc is leaden at best. And, even worse, there are several episodes which have no point other than advancing the season arc. So basically about half of the episodes are a waste of time. In contrast, in the first three seasons of Buffy, even the worst episode would have some brilliant character interplay or some witty dialogue that would redeem it.

Even worse, by the time season four rolled around, Joss was beginning to believe his own hype, as a master show creator/writer/director. He started using his episodes as a chance to be an auteur, using experimental techniques. Hush in season four was basically an experiment in writing a Buffy episode as a silent movie. The Body in season five was a meditation on grief, with no background music and rough handheld camera work. Once More with Feeling in season six was Buffy re-imagined as a musical.

Firefly continued to demonstrate this tendency. Listening to Joss's commentary on the pilot and second episode, Joss spent as much time commenting on the different camera angles he was using as on the show itself. He did mention these grand ideas about where the show was going to go. But he lost the importance of crafting each individual episode on the way to his grand ideas. Without the commentary, and as the episodes were originally seen on the air, the show appears kind of meaningless and pointless. Nothing interesting happens, because I didn't really care about the characters as they were presented on the screen. Now Joss might say that he wasn't given a chance to develop the story the way he wanted to, but that's lazy storytelling. Good storytelling doesn't require commentary. It's all out there on the screen. He demonstrated he could do that with the early seasons of Buffy. Unfortunately, he got lazy in the later seasons of Buffy where he had to explicitly lay out the themes he was exploring because he had never shown them on screen; I stole this point from David Hines's review of the season four finale where he says:

the "Slayerettes being driven apart" angle has been done so ineffectively over the season that the writers have had to hammer it on in the past couple of episodes to let us know that yes, they were *trying* to do something, and they hadn't just forgotten quality screen time for the supporting characters *really.* Accordingly, Fury doesn't have much choice but to make his resolution of the mess clumsy, hammering the plotline home even as he resolves it. The characters saying there have been problems substitutes for the problems' adequate development onscreen; this is essentially the writers saying to the audience, "Look, guys, we were *trying* to do something here, dammit."

Another issue with Firefly is that Joss was a total prima donna by this point. A great example is the episode "Objects in Space", written and directed by Joss. In the commentary, he describes the episode as an illustration of existentialist philosophy, dropping in references to Sartre's Nausea. Okay, it's kind of neat that he's figured out a way to project his college philosophy into a sci-fi show, but this was one of the first thirteen episodes of a new struggling series! The show was not nearly established enough to be able to waste one of his first episodes creating a philosophical meditation. He needed to be creating characters that we cared about and story arcs that actually had more than a couple seconds of airtime per episode. His commentary points out some things that were maybe revealed in passing by the episode, but they're so subtle as to verge on being read into the episode after the fact, in a postmodern sort of way.

Anyway. Not that any of this is relevant. I don't need much of an excuse to demonstrate my insanely detailed knowledge of the Buffyverse. So, yeah. I think it would have been interesting to see where Joss was heading with Firefly, especially with the X-Files-esque arc involving River. Apparently Joss was able to acquire the movie rights, so we'll at least get the first set of answers this summer, as the movie picks up six months after the last episode filmed for television. It's a pity, though. I find I prefer well-done television shows to movies. The depth of character and plot development that is possible over the many hours of a television season is much more satisfying than trying to wrap everything up in the two hours of a movie. I wish Joss had been able to rein in his excesses on Firefly, because I think it could have been a jewel of a series. Alas.

P.S. One addendum, a couple days after I originally wrote the above, but before I post it. I think one of the differences between Buffy and Firefly is that Buffy was more episodic. I think Firefly actually works better in a DVD format, because several episodes can be watched in close succession, which allows the viewer to get a better handle on the universe and on the numerous characters. When it was on weekly, it was very confusing, and never developed momentum. I think Joss severely underestimated the difficulty of starting a new show franchise, especially moving to a new network.

If I were him, I would have spent the entire first season introducing us to the characters and to the universe, in effect doing what he tried to do in a single two hour pilot. I think it would have worked much better if he'd started out with a few episodes with just the Serenity crew of Mal, Zoe, Wash, Jayne and Kaylee. Establish them first, establish their identities as the outlaws on the fringe of the universe. Episodes like Jaynestown and the Train Job would have been appropriate for this phase, because the other ship residents didn't contribute a whole lot in those. Then introduce Inara. A few episodes to let that settle in, including Our Mrs. Reynolds for the compare/contrast between Inara and Saffron. Then Book could hop aboard, and another few episodes establishing his character, and exploring some of his background that was only hinted at in the series. Then at about episode 13, right around February sweeps, introduce Simon and River, and, since you've already established the other characters and the universe, you can spend four episodes in a row setting the River arc in motion. Something like that would have been a more measured introduction to the series universe and made it easier for the casual viewer to get on board.

Instead the viewer was tossed into the middle of the universe, with lots of little snippets referring ahead to plots that had yet to be introduced (like the Blue Sun plotline that Joss refers to in the commentary). It was disorienting and off-putting, and that's exactly what you can't afford when starting a new series.

Contrast the Firefly approach of starting with too many characters with what happened with Buffy. Buffy started with four main characters, Buffy, Giles, Willow and Xander. By the end of season one, Cordelia and Angel were added. Oz was added to the mix in season two, as was Spike. By season five, there were way too many characters for a newbie to the show to keep track of, but that was okay because the show was already well established at that point. In contrast, Firefly tried to start with as many main characters as it took Buffy three seasons to introduce. No wonder it was confusing.

Okay, I'm going to post this now because I've officially spent way too long thinking about this.

posted at: 23:03 by Eric Nehrlich | path: /rants/tv | permanent link to this entry | Comment on livejournal

Wed, 23 Feb 2005

Cognitive subroutines and context
More thoughts on yesterday's cognitive subroutines post after thinking about it some more, partially in response to Jofish's comment.

Jofish brings up the importance of leveraging the real world. We don't have to store a hypothetical model for everything in the real world, because we can use the real world to store information about itself, and use that to jog our memory. This is partially why people can find things more easily in a physical spatial environment than in a file system; the physical cues and landmarks of the real world help guide them to their destination. To some extent, the brain uses inputs from the real world to decide which of the cognitive subroutines to run.

This gets back to a running theme of mine that I never fully developed, which is the importance of context. I wrote a footnote post about it at one point, but never returned to the subject. One of the things that fascinates me about our brains is how incredibly contextual they are. For instance, my memory is totally associative. When I get to the grocery store, I often can't remember what I'm supposed to get, until I walk down the aisle, see something, and my memory is jogged. I've mentioned this phenomenon in social contexts as well.

When I put the importance of context together with the idea of cognitive subroutines, a neat idea pops out. Perhaps these cognitive subroutines are like computer functions in yet another way. They have a certain set of inputs which defines their behavior, much like a function prototype defines the inputs for a computer function. When our brain is presented with a situation with certain stimuli, it grabs among its set of cognitive subroutines, finds the one with the closest matching set of inputs, and uses it, even if it's not a perfect fit. In other words, these cognitive subroutines are called in an event-driven fashion based on incoming stimuli.
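
To make the function-prototype analogy concrete, here's a toy sketch in Python (purely illustrative - the subroutines and stimuli are made up, and I'm obviously not claiming the brain literally does keyword matching):

    # Toy model: each "cognitive subroutine" declares the stimuli it expects.
    # Incoming stimuli trigger whichever routine matches best, even imperfectly.
    SUBROUTINES = {
        "greet_friend":   {"inputs": {"familiar face", "smile"},       "action": "say hi, relax"},
        "avoid_danger":   {"inputs": {"loud noise", "sudden movement"}, "action": "flinch, step back"},
        "grocery_recall": {"inputs": {"store aisle", "cereal box"},     "action": "remember the shopping list"},
    }

    def dispatch(stimuli):
        """Pick the subroutine whose expected inputs best overlap the incoming stimuli."""
        best_name, best_score = None, 0
        for name, routine in SUBROUTINES.items():
            score = len(routine["inputs"] & stimuli)
            if score > best_score:
                best_name, best_score = name, score
        return SUBROUTINES[best_name]["action"] if best_name else "stop and think consciously"

    # An imperfect match still fires the closest routine - which is the point:
    print(dispatch({"familiar face", "smile", "loud noise"}))   # "say hi, relax" - greet_friend wins despite the loud noise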

An interesting idea, but is there any evidence to support it? I think there may be in the existence of logically inconsistent positions. We all have positions on various issues that may conflict with each other. The canonical one is the person who is pro-life in opposing abortion, but pro-death in supporting the death penalty. How can the person reconcile these opposing viewpoints? Within a single hierarchical logical structure, it's difficult. However, if the brain and its beliefs are treated as a set of separately created cognitive subroutines, each of which is activated by its own set of inputs, then the contradiction goes away. Each belief isn't part of a large scale integrated thought structure; it's contained within its own idea space, its own scope to use the programming term. Within that scope, it's self-consistent, and it doesn't care about what happens outside of that scope.

Only if you make the effort to try to reconcile all of your individual beliefs do contradictions start to pop up. But it's a difficult task to break the beliefs out of their individual scopes, so most people don't bother unless they are philosophers.

And to tie this all back to my favorite unifying topic, of stories, the effectiveness of stories lies precisely in their ability to activate certain contexts within our brains. This is why Lakoff emphasizes framing; by framing issues in a certain way, the conservatives set the context that the audience uses and actually choose which cognitive subroutines are activated in considering that issue. Advertisers seek to take advantage of this as well; commercials showing beautiful women drinking beer are trying to activate certain cognitive subroutines to connect the concepts.

Wow. When I started this post, I didn't know I was going to be able to tie all of my hobby horses together into one overarching model, but there ya go. I know I'm ignoring a lot of details, and making a bunch of simplifying assumptions, and using an overly reductive model of the mind, and being unclear on language, but, hey, that's what you get when you read a blog. Eit.

P.S. The Firefly critique is written. I'll get to it tomorrow. Unless I end up expounding more on this subject.

posted at: 20:48 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 22 Feb 2005

Cognitive subroutines
This is going to be a relatively long post, mostly inspired by reading Blink, by Malcolm Gladwell, and Sources of Power, by Gary Klein, both books that explain how and why our unconscious decision-making capabilities are often better than our conscious ones, and also explain when such capabilities fail and need to be over-ridden.

I was sitting there thinking about these issues last week while sitting on stage during our concert of Schumann's Das Paradies und Die Peri. We have a section in the middle of the concert where we sit for about 45 pages with only a couple pages of singing in the middle to keep us awake. So for four nights in a row, I had plenty of time to sit and think. And on Friday night, I had one of those moments where I connected a bunch of ideas, and synapses lit up, and I found a story that really works for me explaining some of this stuff. I was actually sitting there in the concert trying to figure out if I could get out my Sidekick and send myself a reminder so that I wouldn't forget the synthesis, but I couldn't. Fortunately, the idea was strong enough that I jotted down the basic outlines when I got home. This is all probably pretty obvious stuff, but it put things together in a way that made a lot of sense to me, bringing together a bunch of different ideas. So I'm going to try to lay things out here.

The basic idea builds off of Klein's idea of expertise getting built into our unconscious. Our brain finds ways of connecting synapses that leverage our previous experience. Why does it do that? I'm going to assume that it's a result of the constraint stated in The User Illusion, that consciousness operates at only 20 bits per second. The information processing power of our conscious mind is very low, so our unconscious mind has to find ways of compensating for it.

Here's the basic analogy/story that I came up with, being the programmer that I am. When I'm writing code, I often notice when I need to do the same task over and over again. As any programmer knows, when you're doing something over and over again, you should encapsulate that repeated code into a subroutine so that it doesn't need to be copy-and-pasted all over the place. I would imagine that a self-learning neural network like our brain does a similar task. So far, so obvious.
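
For the non-programmers reading along, here's the kind of thing I mean, sketched in Python (the details are invented, but the refactoring move is the standard one):

    # Before: the same three lines copy-and-pasted for every recipient.
    print("Dear Alice,")
    print("Thanks for your note.")
    print("- Eric")
    print("Dear Bob,")
    print("Thanks for your note.")
    print("- Eric")

    # After: the repeated work is encapsulated in one subroutine.
    def send_thanks(name):
        print("Dear %s," % name)
        print("Thanks for your note.")
        print("- Eric")

    for name in ("Alice", "Bob"):
        send_thanks(name)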

This relates pretty well to my own experience as a learning machine. When I'm learning a new game, for instance, my brain is often working overtime, trying to figure out how the rules apply in any given situation, going through the rules consciously one by one to figure out what the right move should be. As I play the game more and more, I learn to recognize certain patterns of play so that I don't have to think any more, I just react to the situation as it's presented. This is what Klein describes as Recognition-Primed Decision Making. To take a concrete example, when I was first learning bridge, the number of bidding conventions seemed overwhelming. I had this whole cheat sheet written out to which I continually referred, and every bid took me a while to figure out. As I played more and more, I learned how each hand fit into the system, so that I could glance at my hand and know the various ways in which the bidding tree could play out. As Klein describes it, my expertise allowed me to focus on the truly relevant information, discarding the rest, allowing me to radically speed up my decision making time.

Back to my story. Thinking about wiring our unconscious information processing architecture as a bunch of subroutines leads to a couple obvious conclusions. For one, it's easy to imagine how we build subroutines on top of subroutines. A great example is how we learn a new complicated athletic action. It also applies on the input side.

Another obvious result is that because subroutines are easy to use cognitive shortcuts, they may occasionally be used inappropriately. What happens when a subroutine doesn't quite fit what it's being used for? Well, in my life as a programmer, I often try to use that subroutine anyway. It doesn't end up giving me quite what I want, so I find a way to kludge it. I'll use the same subroutine, because I don't want to change it and mess up the other places that it's called, but I'll tack on some ugly stuff before it and after it to compensate for the ways in which it doesn't quite do what I want.
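
In code, the kludge looks something like this (a deliberately ugly sketch with made-up numbers):

    # price_with_tax() was written assuming one hard-coded tax rate, and it's
    # called all over the codebase, so I don't dare change it. Instead I tack
    # ugly correction code around the call to make it do what I actually want.
    def price_with_tax(price):
        return price * 1.05               # assumes a 5% tax everywhere

    def price_with_tax_elsewhere(price, actual_rate):
        taxed = price_with_tax(price)         # call the existing subroutine anyway...
        untaxed = taxed / 1.05                # ...undo the part that doesn't fit...
        return untaxed * (1 + actual_rate)    # ...and bolt on what I really wanted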

How does this relate to our brains? I think a prejudice is essentially the same as a cognitive subroutine. It does a bunch of processing, simplifies the world down to a few bits, and spits out a simple answer. And, in most cases, the subroutine does its job, spitting out the right answer; it wouldn't have been codified into a subroutine if it didn't. Much as we may not want to admit it, prejudices exist for a reason. However, when we start to blindly apply our prejudices, using these canned subroutines without thinking about whether it's being applied under the appropriate conditions, then we get into trouble. Gladwell calls this the Warren Harding error.

What is the right thing to do? Well, in programming, the answer is to think about how the subroutine is used, pull out the truly general bits and encapsulate them into a general subroutine, and then create specific child subroutines off of that, assuming we're in an object-oriented environment. In general when using a subroutine, certain assumptions are made about what information is fed into the subroutine, and what the results of the subroutine will be used for. If those assumptions are violated, the results are unpredictable. A more experienced programmer will put in all sorts of error checking at the beginning of each subroutine to ensure that all the assumptions being made by the subroutine are met.
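
The cleaner version of that same toy example might look like this (again just a sketch - the rates are made up, and a bare assert stands in for real error checking):

    # Pull the general logic into a parent class, derive specific children,
    # and check the assumptions ("preconditions") at the top of the subroutine.
    class TaxedPrice:
        RATE = None                           # each child must declare what it assumes

        def compute(self, price):
            assert self.RATE is not None, "no tax rate defined"
            assert price >= 0, "price must be non-negative"
            return price * (1 + self.RATE)

    class CaliforniaPrice(TaxedPrice):
        RATE = 0.0725                         # illustrative, not an actual quote

    class OregonPrice(TaxedPrice):
        RATE = 0.0                            # no sales tax

    print(CaliforniaPrice().compute(10.00))   # 10.725
    print(OregonPrice().compute(10.00))       # 10.0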

How does this apply to the cognitive case? I think this is a case where it gets back to my old post about questioning the assumptions. If we try to understand our brain, and understand our kneejerk reactions, we will be in a much better position to leverage those unconscious subroutines rather than letting ourselves be ruled by them; our intelligence should guide our use of the cognitive shortcuts and not vice versa.

This idea of cognitive subroutines also gives me some insight into how to design better software. I picture this cognitive subroutine meta-engine that tracks what subroutines are called, and strengthens the connections between those that are often called in conjunction or in sequence, to make it easier to string those routines together, eventually constructing a superroutine that encompasses those subroutines. It seems like complex problem-solving or pattern recognition software should be designed to have a similar form of operation, where the user is provided with some basic tools, and then the software observes how those tools are used together, and constructs super-tools based on the user's sequence of using the primitive tools (alert readers will note that this is the same tactic I propose for social software). I'm somewhat basing this on a book I'm reading at work called Interaction Design for Complex Problem Solving, by Barbara Mirel, where she discusses the importance of getting the workflow right, which can only be done by studying how the users are actually navigating through their complex problem space.
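
Here's a very rough sketch of what such a meta-engine might track (the tool names are invented, and real software would obviously need more than co-occurrence counts):

    from collections import Counter
    from itertools import combinations

    class ToolTracker:
        """Watch which primitive tools get used in the same session, and
        suggest bundling the pairs that show up together most often."""
        def __init__(self):
            self.pair_counts = Counter()

        def record_session(self, tools_used):
            for pair in combinations(sorted(set(tools_used)), 2):
                self.pair_counts[pair] += 1

        def suggest_super_tools(self, threshold=2):
            return [pair for pair, count in self.pair_counts.items() if count >= threshold]

    tracker = ToolTracker()
    tracker.record_session(["filter", "sort", "plot"])
    tracker.record_session(["filter", "sort"])
    tracker.record_session(["filter", "sort", "export"])
    print(tracker.suggest_super_tools())   # [('filter', 'sort')] - a candidate super-tool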

So there you go. Treating the brain as a self-organizing set of inheritable subroutines. I'm sure this is obvious stuff. Minsky's Society of Mind probably says this, but I've never read it. Jeff Hawkins's book On Intelligence probably says something similar as well (I should probably read it). And I suspect that Mind Hacks is in the same space. So it may be obvious. But, hey, it's new to me. And it just makes a lot of sense to me right now, in terms of how I learn to do new complex activities, and how it relates to my work as a programmer. I'll have to think some more about if this can actually be applied in any useful manner to what I do. And about the shortcomings of the theory.

P.S. Tomorrow we'll get back to more light hearted subjects, like why I think the TV series Firefly failed, with a compare and contrast to what Joss did right in Buffy.


new complicated athletic action: When learning a new action, we break the action down into individual components and practice them separately. When I was learning how to spike a volleyball, the teacher had us first work on the footwork of the approach. Left, right, left. Left, right, left. We did that a bunch of times, until it became ingrained into muscle memory. Then we practiced the arm motion: pulling both arms behind our back, bringing them forward again, left arm up and pointed forward, right arm back behind the head, then snapping the right arm forward. Then we coordinated the arms with the footwork. Once the entire motion was solid and could be performed unconsciously, then we threw a ball into the mix. That had to come last because the conscious mind is needed to track the ball and coordinate everything else to make the hand hit the ball in the air for the spike. Only if everything else is automatic do we have the processing power to make it happen. If we had to think about what steps we needed to take, or how to move our arms, we would never be able to react in time to get off the ground to hit the ball. It's only because it's been shoved into our unconscious that we can manage it.

Another recent sports example for me is ultimate frisbee. I've been working on my forehand throw for the last year or so. After several months, I finally got it to the point where I could throw it relatively reliably while warming up. However, as soon as I got in the game, I would immediately throw the disc into the ground or miss the receiver entirely. It was incredibly frustrating because it demonstrated that I could only throw the disc when I was concentrating on the mechanics of how to throw the disc. As soon as I was thinking about where I wanted to throw the disc, or how to lead the receiver, the mechanics went away, and the throw failed. This last tournament I played, though, the muscle memory of the throw had apparently finally settled in, so when I saw a receiver open, I thought "Throw it there", and the disc went there. The relevant neural infrastructure had finally been installed, so that I could concentrate on the game, and not on the throw, and it was incredibly satisfying. I threw three or four scores, which was more than I ever had before, and only threw it away once the entire day, ironically on a play where I had too much time to think, so that the conscious machinery kicked back into play rather than letting the unconscious muscle memory do its thing.


input side: I guess the analogue on the input side would again be game play recognition. A beginning chess player will have to laboriously trace out where each piece can move and can maybe see the results of a single move. An intermediate chess player will recognize how to handle certain subsections of the board, and be able to project out a few moves. The expert chess player will instantly take in the position of the whole board, and understand how the game will develop as a whole. And this is definitely a cognitive shortcut born of repeated experience. This study demonstrates that chess masters perform vastly better than novices at being able to recognize and remember valid board configurations, but are no better than novices at recognizing invalid boards. In other words, because the novice perceives the board as a collection of individual pieces, they cannot tell the difference between a valid and an invalid board. Meanwhile, the expert, because they perceive the board as meaningful chunks of board positions, can rapidly grasp the game situation of a valid board, but the invalid board looks like nonsense, demonstrating that their brain is using its expertise as a cognitive shortcut.

More generally, when confronted with a complex situation, an expert can pay attention to the key experiential data and ignore the rest. Gary Klein describes how an expert always has a mental model to which he is comparing the situation, a story if you will, that describes what should be happening. When what actually happens differs from what he expects to happen, the expert knows something is wrong, and re-evaluates the situation, as Klein illustrated with several anecdotes from firefighters. And part of that model is being aware of when things _don't_ happen as expected. And it may not be a conscious model; in fact, Klein describes many instances where the firefighters attributed their decisions to a sixth sense or ESP. But it is a model born of experience; the unconscious brain has experienced the situation over and over again until it knows how certain factors will affect the outcome (Klein calls these factors "leverage points").

posted at: 21:57 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 19 Feb 2005

Balancing control and autonomy
I previously linked to this New Yorker article on how the army is self organizing to handle the challenges of Iraq. After putting up the link, I had a conversation with a coworker that evoked some more thoughts. One was the observation that the army is composed of units, each of which can be run autonomously, from squad to company to battalion. The commanders of each unit are given overall mission definition, and are left to figure out how to use their unit to accomplish their goals. I wonder if a company could be structured that way, such that any unit would be functionally capable of operating independently. I think this is part of what "matrix management" is attempting to do, but it never seems to work.

Part of the issue is the unwillingness of management to give up control to their subordinates. Even when they do give up control, they often restrict behavior with processes and SOPs to such an extent that the subordinates have no freedom of action. There's some good reasons for that - the processes are often put in place to prevent bad things from happening to the company. However, by not giving the employee any freedom of action, the company is also preventing its employees from contributing in new and unforeseen ways. In other words, it's a balance between "doing no harm" to the company, and the risk/reward of giving employees control.

The right balance is hard to find. I think in an organization composed mostly of inexperienced people, the first choice might be better; McDonald's and the franchise mentality of having a three-ring binder of regulations exemplifies this. However, in an organization composed of talented, independent people, such restrictions are insulting (not that I have an opinion). Of course, the pendulum can swing too far, and give the employees too much independence; Malcolm Gladwell's essay on Enron describes the consequences of that. As usual, it's a matter of context; each company will have a different blend of competencies, and that blend should determine the management's approach to determining this balance. There's no such thing as the One True Management Style. It's always contingent. Managers, not MBAs.

posted at: 00:42 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal

Wed, 16 Feb 2005

Jamie Zawinski on groupware
Jamie Zawinski posted a rant about groupware yesterday, pointed to by both Clay Shirky at Many-to-Many and Joel on Software. Zawinski is famous for being one of the first employees of Netscape, and then for his very public resignation. His rant about groupware is worth reading, but I'll excerpt the lines that particularly caught my attention here.

So I said, narrow the focus. Your "use case" should be, there's a 22 year old college student living in the dorms. How will this software get him laid? ... instead of trying to build some all-singing all-dancing "collaboration server" where you're going to throw in all kinds of ridiculous line items like bulletin boards and task tracking and other shit, let's suppose you narrow your focus to just calendars.

Given that my thoughts on social software are similar, and my first thoughts about a tool I'd design are similar in scope to the calendar that he suggests, I figured I'd link to his rant as support for my theories.

posted at: 22:52 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Tue, 15 Feb 2005

The Passion of the Geek
I was IM-ing a friend of mine a few days ago, and was telling her that I wasn't sure I wanted to remain a programmer, commenting that I wasn't really a geek at heart. She replied "you'll always be a geek though. you can be a pundit geek", which got us into a brief discussion as to what defined a geek. My attempt: "a geek is somebody whose passion for something overrides their fear of social ostracism", building off of Paul Graham's essay on nerds and popularity. Because that's pretty much what a geek is - somebody whose love of Star Trek or sci-fi or Buffy or computers or physics matters to them more than what other people think. It's why people often fear and envy geeks simultaneously; people fear geeks because geeks' blatant disregard for the social norms that they spend so much time trying to observe implies that perhaps those social norms are not the laws of nature they seem to be, but are, in fact, just arbitrary rules. They envy the geeks because it would be so freeing to not worry about what other people think and just pursue one's own passions, let the rest of the world be damned. So, in at least some parts of my life, I'm a total geek. In others, I am still all too susceptible to the fear of social ostracism. Part of what I'm trying to do with my life is find new passions to pursue.

I guess I don't have as much to say on the subject as I'd thought. But this will get its own post anyway, because I really wanted to title the post "The Passion of the Geek".

posted at: 22:18 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 14 Feb 2005

Presence in IM
danah boyd just put up a post about different styles of using IM (instant messaging), contrasting those who use it in an always-on way versus those who turn it on only to talk. It's an interesting reflection on the social cues that people lose when moving to an online world, and how it takes time to train newcomers to the new cues necessary.

One thought I had is that adding more contextual information might help to quicken the learning process. My friend Jofish just published a paper titled Communicating Intimacy One Bit at a Time, where he and his collaborators gave partners in a long distance relationship a piece of software that would light up a software LED on one partner's screen when the other partner clicked a button. The LED's brightness would slowly decay with time, indicating presence.

Perhaps a similar scheme could be implemented for IM, with different colors representing active communication versus presence, with a quick fade from active to passive. Idle time serves a similar purpose, but is perhaps ignored or unseen. Perhaps it's just a matter of making idle time visible and contextual through color to help alert relative IM newbies to social appropriateness. Or perhaps a more active scheme is necessary, with the user indicating their openness for conversation by clicking a button. As one of the post commenters pointed out, there's a wealth of contextual cues we use in real life, from eye contact to body position, to indicate that we want to talk. And such cues are limited verging on non-existent for current instantiations of online communication. I suspect that the people that get this right (and, no, AIM's graphical smilies are not the solution) will sweep the online world (shades of The Black Sun in Snow Crash, where Juanita's virtual facial expression work allowed patrons to "condense fact from the vapor of nuance").
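
A back-of-the-envelope sketch of the kind of fading presence indicator I'm imagining (the colors and the five-minute time constant are pure guesses):

    import time

    ACTIVE_COLOR = (0, 200, 0)      # bright green: actively chatting
    IDLE_COLOR = (120, 120, 120)    # grey: merely present
    FADE_SECONDS = 300.0            # fade from active to passive over five minutes

    def presence_color(last_message_time, now=None):
        """Blend from 'active' to 'idle' as the last message recedes into the past."""
        now = time.time() if now is None else now
        t = min(max((now - last_message_time) / FADE_SECONDS, 0.0), 1.0)
        return tuple(round(a + (b - a) * t) for a, b in zip(ACTIVE_COLOR, IDLE_COLOR))

    # Two minutes after the last message, the indicator is partway to grey:
    print(presence_color(last_message_time=0, now=120))   # (48, 168, 48)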

P.S. I commented on the post itself, but figured I'd post here as well because I haven't gotten around to installing MovableType or some other blogging software that supports Trackbacks. Maybe I should just break down and pay somebody to host such software for me.

P.P.S. I actually wrote about five posts last night (Sunday night), but I'll post them one a day this week, which works out well, because it's a concert week so I won't have time to write anything before Saturday anyway. Let me know which method of dispersal you prefer, the single drip mode or the burst mode.

posted at: 23:13 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Sun, 13 Feb 2005

What makes a game successful
Just a quick comment on this New York Times article about World of Warcraft.

"It's the difference between an immersive experience and a mechanical diversion," Mr. Metzen said. "You might spend hundreds of hours playing a game like this, and why would you keep coming back? Is it just for the next magic helmet? Is it just to kill the next dragon?

"It has to be the story. We want you to care about these places and things so that, in addition to the adrenaline and the rewards of addictive gameplay, you have an emotional investment in the world. And that's what makes a great game."

This is wrong, wrong, wrong. Absolutely wrong. I'm astonished that a representative of a game company as successful as Blizzard could even say something like this. The thing that keeps people coming back to a game like that is the other people. Period. The only killer app in the history of computer technology is human communication. I was an early player of MUDs, way back when. The games themselves were utterly primitive, text based adventures with simple combat rules. But they were addictive and enthralling because I was interacting with people all over the country. I wasn't a sixteen-year-old twerp; I was Kamikaze the mighty thief. I earned respect based on my actions in the game, not on who I was in real life.

And, from everything I've heard about the current generation of MMORPGs (massively multiplayer online role playing games), social interaction is still the main attraction. Friends use it for hanging out together. Others use the game as a way of establishing an in-game reputation that they could never achieve in real life. It's not about the story that the game creators write. It's about the story that the players are creating together, the community that they are building. And any game creator who doesn't understand that will end up frustrated, wondering why the players aren't exploring their uber-cool puzzle area. Things never change; wizards on the MUD I used to play on would complain endlessly about the stupid players who wouldn't explore their areas. They didn't get it. It's about delivering to your players what they want. And they want opportunities to create their own story, not play yours.

posted at: 22:28 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Wed, 09 Feb 2005

Followup to Trust, but Verify
I wanted to pursue a couple things I mentioned in my last post. In the P.S. to that post, I speculated that customer enthusiasm might be a sufficient factor in making decisions. But I was thinking about it this morning and realized that there are some great counterexamples to that. Apple has a nearly cult-like following in terms of customer satisfaction and yet has never broken through to the mass market. They've done okay, of course, but never as more than a fringe industry player. BMW is another good example of a company that elicits great customer satisfaction while serving a niche market. I'm not sure what it means, but it does poke holes in my theory that a great story and customer satisfaction are enough.

For many things, quantitatively and analytically maximizing customer value and throughput is the way to go. Very few of us have brand preferences for things like toothpaste. The different brands are fungible. So the companies can't rely on building a brand and eliciting customer satisfaction. It's a numbers game of minimizing product cost and maximizing customer selection. And that _can_ be handled analytically by the tools that Bonabeau describes.

Another great example is Amazon. Every now and then, when you go to the Amazon web page, you'll get an alternative user interface, where they've moved some things around. You go back 15 minutes later and it's back to normal. What's up with that? Apparently Amazon occasionally has some new UI ideas that it wants to try. It changes its front page for a while. 10,000 people try it. Then they switch back ten minutes later after they've collected enough data. And that's a large enough sample that you can observe statistically significant effects. I read an article at one point that described how Amazon tested the position of the "one-click ordering" button in various places, and determined that the place where it eventually ended up increased the likelihood of ordering by 1 or 2%. Seems like a minor change. But for their volume of sales, it translates into an enormous amount of money.
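
To see why a sample that size is enough, here's a rough two-proportion check in Python (the conversion numbers are invented just to show the arithmetic):

    from math import sqrt

    # Made-up numbers: 5,000 visitors see each version of the page.
    n_a, n_b = 5000, 5000
    conv_a, conv_b = 400, 475            # 8.0% vs 9.5% checkout rate (invented)

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se

    print("lift: %.1f%%, z = %.2f" % ((p_b - p_a) * 100, z))   # z is about 2.65, past the usual 1.96 cutoff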

That's sort of what I mean by "Trust, but Verify". Their UI designers had some thoughts on how to improve the conversion rate. They mocked them up, tested them, got real data, and were able to make an informed decision. Bring down the time it takes to get results, and you increase performance and reduce the penalty of making poor decisions. I'll talk about this more when I finish reading Experimentation Matters.

posted at: 00:07 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal

Mon, 07 Feb 2005

Trust, but Verify
After hearing me talk about how much I enjoyed Gary Klein's Sources of Power, a friend of mine forwarded me this Harvard Business Review article, titled Don't Trust Your Gut, by Eric Bonabeau. Bonabeau takes on the recent books promoting the use of intuition in business, calling out Gary Klein specifically, and attempts to make the case that trusting intuition is dangerous in situations involving complex interdependencies, because "the required computations outstrip the mind's processing capabilities". He recommends using "a computational decision-support tool".

If I were more cynical, I would speculate that Bonabeau works for a company making such tools, but I'll leave the ad hominem attacks aside for now, instead attacking his ideas. For one thing, the very idea that complex interdependencies are more likely to be correctly tracked in software than in people's brains is laughable to me. Social software, the best attempt of many smart people to capture the complex interplay of social relationships among people, is so inadequate that it can be described as autistic, despite trying to do a job that each of us can handle unconsciously. To expect that decision-support tools can capture even more complex situations in a realistic and useful manner is idealistic at best.

I think that the author significantly misunderstands the point of research such as Klein's. Of course you should use algorithms in cases where the inputs and outputs of a process can be well described by numbers. But in the real world, the numbers are often so distorted as to be unrecognizable, if not completely made up (classification schemes are a good example). A good executive will know how to drill down and get to what's really going on, where a program will take garbage in, and produce garbage out.

I would also cynically observe that most of the time, the quantitative decision making that happens is there for one, and only one, reason: to justify the decision that was already made by the gut reaction of the person in charge. The person in charge doesn't want to be left responsible, so they force all of their subordinates to generate numbers to support the decision that's already been made, and the subordinates are told when the numbers are "wrong" and need to be "tweaked".

Instead of the model that Bonabeau proposes (spend lots of time up front with decision-support tools before making a decision), I would propose an alternative vision for management, courtesy of Ronald Reagan: Trust, but Verify. In other words, trust your gut and spend your time figuring out how you can verify or invalidate your gut decision as quickly as possible. This is somewhat influenced by the fact that I'm currently reading the book Experimentation Matters, which recommends front-loading any project with experiments so that you can change course sooner if you're going down the wrong path.

I believe that this approach makes more sense for a variety of reasons. From my own personal experience, when I was Rush Chair at TEP, working in a complex, time-crunched environment running the lives of 22 brothers and a multitude of freshmen, I quickly found out that it was far better to make the wrong decision quickly than to dither and eventually make the right one. If I made the wrong decision, people would go off, try it, figure out it didn't work, and then try the other choice. That would often happen in less time than I would have dithered in making the original decision. And even after all that dithering, I would often have made the wrong choice anyway, so I had just delayed the inevitable. Making decisions quickly, even the wrong ones, often leads to getting on the right path sooner.

Admittedly, this only works if you can get immediate feedback on a decision. But the tools to do that are growing more powerful every day, as the book describes. I think simulations are at least as likely to be accurate as most of the software that Bonabeau describes in his article, and simulations let you see the results of your decisions in a very swift and controlled fashion. Go with your gut, see what happens, and revise. Create a tight feedback loop, and run through several cycles, evaluating the results each time (as a side note, this is a process that Gary Klein describes expert decision-makers going through in the field). I'm a big fan of rapid prototyping for engineering development (see the book Serious Play for a description) and really don't see why the same principles couldn't and shouldn't be applied to project management. Just as the classic "waterfall methodology" has been outmoded by strategies such as "extreme programming", I expect the typical "Gate-Phase Process" to eventually be outmoded by an "Extreme Management" strategy. "Extreme Management", like extreme programming, would be a test-based methodology; you could make decisions from the gut quickly, but would immediately be looking for ways to verify that decision as quickly and cheaply as possible.

I think that if as much time and effort were spent on getting the iteration and feedback time for simulations down as is spent on the "decision-support tools" that Bonabeau recommends, the world of "Extreme Management" would not be far off. Maybe I should write a book!

P.S. Speaking of books, I wanted to get one more angle on this subject that's related to my other book idea, stories. Imagine that there are two managers pitching a new product to an executive committee. One has a great story, explaining exactly what niche her new product will fill, and lots of specific details of how it will change the lives of people who buy it. She has some numbers to support her idea, but her emphasis is on the testimonials she's gotten from customers who adore the idea of her product. The other manager puts up chart after chart of numbers demonstrating that there is an underserved market niche of some sort, but has talked to no customers and generates no excitement. Which one would be more convincing? I would say the one with the good story. A good story can be verified, and the numbers can be run. A product has to excite people; it can't just be a soulless attempt to describe how Rational Evaluative Maximizing Models will benefit from it. But I'm totally biased, of course.

posted at: 23:37 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal

Wed, 02 Feb 2005

The Internet as a Global Brain
This is a pretty minor observation, but while reading Gonzo Marketing on BART this morning, my brain cross-pollinated some of Christopher Locke's ideas on micromarkets with the ideas of Global Brain, and I realized that the World Wide Web maps very well to Howard Bloom's conception of a Global Brain.

Let's review the elements of what Bloom calls a "collective learning machine":

  1. Conformity enforcers - Google provides this functionality for the Web by making the power law explicit, where "them that has, gets"; in other words, the preeminence of popular websites is reinforced because popular websites are the top results on Google, making people more likely to visit them.
  2. Diversity generators - The Web provides this by making it so easy to start a website of one's own. Everybody can start a blog with Blogger or LiveJournal. If your ideas are interesting, people will start reading, and you can be launched up the power law curve. Look at the extraordinary rise of bloggers like Kos.
  3. Inner-judges - I think this is the flipside of starting a website. If one does not attract a sufficient audience to keep one's interest, one gives up. There are a lot of dead websites and blogs out there. The level of attention necessary to sustain interest depends on the person, but if we are not getting the feedback that we desire, we give up (no, this is not a plea for readership :) ).
  4. Resource shifters - There are all sorts of tools for resource shifting on the Web, where attention is the scarce resource. From del.icio.us, to bloggers posting links, to passing emails around, we all tell our friends about web pages that we find interesting. The more people we tell, the more attention a web page receives, until it has risen to the top of the Google ranks, and is the new conformity.
  5. Intergroup tournaments - This is the only element of Bloom's model that doesn't fit very well. Alas.

Anyway, I thought it was interesting that tools like Google and blogging correspond so well to Bloom's model. I'm not sure it means anything, but I thought I'd share the observation.

posted at: 23:18 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Sun, 30 Jan 2005

Attention management system
In light of my interest in social software, I'm finally opening up a new category in my blog for it, to separate it out from the people rants.

Of course, this first post isn't actually about social software, except for possibly a bit at the very end. Part of what I'm struggling with right now is that I see this convergence of technology and sociology and even management starting to occur, and I have some intuitions as to where it's heading, but it's all so inchoate that I can't quite nail it down well enough to describe it. I spent a few hours talking with my friend Brad a couple nights ago trying to explain my intuitions, and all it did was make it obvious that I couldn't define even a single specific example of what I wanted to do, and how I wanted to contribute. I think I want to be in that space, but I can't figure out if I should be a tool-builder, a philosopher, a researcher, a writer or an evangelist. I'm probably in the space of being a philosopher-evangelist right now, I guess.

Brad also asked a good question: Where are today's collaborative tools lacking? It's a hard question, because I thought I knew the answer, but when I tried to explain, it turned out to be more elusive than I thought. The best answer I have right now is that today's tools, like pretty much all software today, make the user adjust to them, rather than adjusting to the user. Humans are infinitely more adaptable than technology, and so we live with all sorts of inconveniences because it's easier to adapt than it is to fix the problem (there's also an element of conservation of cognitive effort involved).

What would it mean for a piece of software to adapt to its user? It would mean that:

  1. It would be aware of its environment, able to adapt to the ever-changing context that we live in.
  2. It would remember past interactions with the user, and respond appropriately.

Unfortunately, both of these aspects are really hard, and easy to screw up. Let's take the case of Microsoft Word. Its attempt to solve the first condition was the Paper Clip, which would say things like "It looks like you're typing a letter. Would you like me to bring up the letter template?" It was annoying rather than helpful, and everybody I know turned it off as soon as they could. One of Microsoft's attempts to solve the second condition was the adaptive "personalized" menus, which displayed only the menu options that you had used recently. This turned out to be a user interface faux pas, because humans are incredibly good at using physical location as a mnemonic, so by switching the menus around dynamically, the software actually increased the cognitive effort needed to use the menus, because the user had to re-read the menu entries each time to find out where things were.

So it's hard. But I'm going to take a stab at a case study to see if I can come up with something better for a really simple application: the to-do list.

Why the to-do list? Part of what I've been doing over the past few months is observing how I use technology. I figure I'm one of the more techno-savvy people around, and therefore technology issues that bug me now will probably become issues that affect the wider population in about two to three years. So I've been examining how I use tools like email and my Sidekick in my daily life.

One of the interesting things I've observed is that I use those tools in particular as a way of managing my attention queue. When I think of something interesting, or something that I need to do later when I get home, I whip out the Sidekick, send an email to myself, forget about it and go back to what I was doing. Then, later, when I get home, I'll read my email, and be reminded at a time when I can actually do something.

As a side note, one of the elements in play here is the scarcity of attention. We can really only focus on one thing at a time. It could even be said that we're moving towards an attention economy, with supply and demand to be satisfied. So finding ways to manage how our attention is directed is a pretty vital skill, and will become more so as we spend our attention in an ever-increasing number of ways.

I am beginning to think of my attention as a searchable queue, currently managed mostly by email. For instance, when I think of something that I want to write about on this blog, I send myself an email from wherever I am with the word "blog" in the subject line, and a couple line description of the subject. Then, when I've cleared some time to actually write, I search my email inbox for "blog", select among the various ideas depending on my mood, and off I go.

I also use email to keep track of events I'm planning to go to, or web pages I've been meaning to read, or tasks I'm supposed to do. But it's a very crude tool, obviously. I regularly have to go through my ever-growing inbox, trying to remember what I wanted to do. Things fall through the cracks. Obviously, it would be better if I had a tool that would handle this for me explicitly: a technologically-enhanced to-do list.

Some elements of this tool that I think are important:

I think I'm capping it at that because if it's too much of a pain to enter tasks, then I won't use it.

What would the output be? Well, let's examine my current system. In my attempts to just get started, I've been making to-do lists on little scraps of paper with a mix of easy stuff (get a haircut) and long-term stuff (install linux on one of my old boxes). Then when I'm sitting around on a weekend, and decide to get off my lazy butt and do something, I pick up the scrap of paper, glance through it, find something that's of the right scale for what I want to handle, and start on it. The downside is that sometimes I glance at the list, and there's so much to do on it that I get intimidated and don't even get started.

So the interface I think would be interesting would be very simple. The software knows what day it is (in particular, whether it's a weekend would help it decide whether to select long or short tasks). It knows where it's being accessed (during the day on a weekday I'm probably at work; otherwise, if I'm accessing the software, I'm probably at home). When I feel that nagging urge to do something, I pick it up, hit the equivalent of Google's "I'm feeling lucky" button, and it gives me something to do. A single task, hopefully one appropriate to the situation. I can choose to accept the task, decline without stating a reason, or decline with a reason (too long/too short, wrong place).

An interesting corollary of this system is that it will start to sort out my attention queue on its own. Tasks that get rejected every time they're brought up are sent further and further down in priority until they barely pop up at all. If the tool tracks partial completion, things that are partially completed would be moved towards the top of the stack, because I clearly thought they were important enough to start. Things like that.

That would be the main interface paradigm. Then, of course, there would be the various tag-based restrictions, where I say I want a task of a specific type ("I want to write a blog entry"). I would also have the option of getting lists out if desired (e.g. "What errands do I need to run?" or "Show me all of the tasks labelled as quick"). There would also probably need to be a management mode, where I could edit the database of tasks directly, but that's only because I'm a geek.
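
To make that concrete, here's a minimal sketch of how the "I'm feeling lucky" picker might work. This is just a toy I'm imagining, not a real tool: the Task fields, the scoring weights, the decline penalty, and the working-hours guess are all made-up assumptions. But it captures the three behaviors I care about: context awareness (weekend vs. weekday, home vs. work), single-task selection, and rejected tasks drifting down the queue.

    import random
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Task:
        description: str
        tags: set = field(default_factory=set)  # e.g. {"blog"}, {"errand", "quick"}
        length: str = "short"                   # "short" or "long"
        place: str = "anywhere"                 # "home", "work", or "anywhere"
        priority: float = 1.0                   # drifts down each time I decline it
        started: bool = False                   # partially-completed tasks float up

    def feeling_lucky(tasks, now=None, at_work=None, want_tag=None):
        """Pick a single task appropriate to the current context."""
        now = now or datetime.now()
        weekend = now.weekday() >= 5
        if at_work is None:
            at_work = not weekend and 9 <= now.hour < 18  # crude guess at location
        candidates = [t for t in tasks if want_tag is None or want_tag in t.tags]
        if not candidates:
            return None

        def score(t):
            s = t.priority
            if t.started:
                s += 1.0                        # I already invested effort in it
            if weekend and t.length == "long":
                s += 0.5                        # weekends fit long tasks
            if t.place == "anywhere" or (t.place == "work") == at_work:
                s += 0.5                        # right place for this task
            return s

        # weighted random choice, so low-priority tasks still surface occasionally
        weights = [max(score(t), 0.1) for t in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    def decline(task):
        """Declining a task sends it further down the attention queue."""
        task.priority *= 0.7

    todo = [Task("get a haircut", tags={"errand", "quick"}),
            Task("install linux on one of my old boxes", length="long", started=True)]
    print(feeling_lucky(todo).description)

The weighted random choice is deliberate: a strict "highest score wins" rule would show me the same intimidating task every time, which is exactly the scrap-of-paper problem I'm trying to escape.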

Anyway. It's a thought. It isn't really social software, though. Why am I putting it in that category, then? Because it describes the approach that I'm interested in taking towards social software:

  1. Figure out a deficiency in what I currently use, in particular by observing what I currently do and the hoops I jump through unconsciously to manage my life and my social interactions.
  2. Figure out what the task is that I'm trying to accomplish that is impeded by that deficiency.
  3. See if I can think of ways to use context and memory and social interface design to deal with the deficiency.

As I mentioned in that first social software post, I think the approach to take is to design simple tools and see what gets used. And since I don't have a willing population to experiment on to determine what they find useful, I guess I'll use myself.

And even simple tools like this can become interesting very quickly if we include the possibility of interaction, of making it truly social software. Imagine a couple that both used this tool with a common database, preferably accessed through a cell phone. There could be joint tasks, or specific tasks. One could be at the store, and get the "shopping" tag list. Then, when the other stopped by the store later on their way home, they would do the same and find out that those items had already been bought.

Or extend it to groups. Imagine a bug tracking system based around this idea for a software group. Or a political organization trying to get out the vote before election day; having experienced that chaos firsthand, I know such a tool would definitely be helpful, if it didn't suck.

All sorts of possibilities. But I'll stop here, because it's a nice day, and Christy just called and invited me to go join her and UBoat for a hike on Mount Diablo.

posted at: 10:38 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Links to old posts
I'm starting a new category in this blog on social software, but I wanted to provide links to several old posts that probably would have been filed here if I'd had this category.



posted at: 10:27 by Eric Nehrlich | path: /rants/socialsoftware | permanent link to this entry | Comment on livejournal

Sun, 23 Jan 2005

Launch chicken
A friend of mine at Signature shared the theory of Launch Chicken with me. Suppose you're on a project with a tight schedule and several different areas contributing to its success - say, a product launch. And suppose you know that the area you are responsible for is not going to make the launch. You're supposed to hit the abort button and let the project manager know immediately. But you know that another area is even further behind than you. So you hold out, hoping that they'll abort first, taking the blame for delaying the launch and giving you the time you need to finish your area. Now it becomes a matter of will, like the original game of Chicken, where two kids drive cars at each other. Who will chicken out first? And what happens if nobody chickens out? Bad things, like the collision in the original game.

How can such catastrophic distortions of information be avoided? My coworker and I were kicking the question around last week, wondering how a project manager would be able to make the right decision based on the carefully massaged data that they are fed at project review meetings. He asked the question, "In a great organization, do you think that the compression of information being fed to the decision makers is less biased/contrived, or are the decision-makers just superior at sifting out the truth from the pre-digested information they get?"

I think it's probably a combination of both (I'm always distrustful of bi-valued questions). I would suspect that good leaders are able to detect soft spots in people's presentations, where the numbers don't reflect reality, and go check out the raw data to find out what's "really" going on. By doing so, not only will they get a more accurate picture, but they'll also encourage people to present a more "honest" picture at the next presentation. It's a virtuous circle of trust and accuracy.

It also ties into my ideas of what an effective information carnivore looks like: somebody who understands that they are higher up the information chain and are getting only pre-digested summaries, but who also knows they have the ability to open up those summaries and get a more complete picture. They can't do that all the time, because they are very busy and need to leverage the efficiency of the summarized form, but when problems arise, they understand that the summaries are inherently incomplete. Good information carnivores make good managers.

posted at: 20:11 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal

Trading Randy Moss
After reading this Pro Football Weekly article on possible trade scenarios for Randy Moss of the Minnesota Vikings, I wrote back to the author with one of my own. And, well, since I have a blog, I'll share it here. On the off chance it actually happens, it'd be cool to say that I called it.

My hoped-for destination for Randy Moss is the Atlanta Falcons. It works for Randy Moss, because he works best in a playground offense - "I'll go deep, and you throw it up high for me". Michael Vick, the quarterback for the Falcons, also works best in a playground offense - "I'll run around back here until somebody gets open, and then I'll launch it 70 yards through the air". (for those of you who don't watch football, Vick is the most absurdly gifted athlete in the league right now - he's faster than anybody on the field, and can throw the ball further than pretty much any other quarterback).

The Falcons desperately need a deep threat receiver. I came up with this scenario last week, but today's game against the Eagles just proved it. The Eagles were able to put 9 defenders in the box (normally it's 7), because the Falcons receivers just aren't threats - they can't get open. Now picture adding Randy Moss, the single most dangerous deep threat in the NFL. All of a sudden, you have to drop the safety back to protect against Vick flinging it 60 yards through the air to a streaking Moss in stride. Peerless Price, the current lead receiver, goes back to the #2 role that he's more comfortable in (his best year as a pro was playing opposite Eric Moulds in Buffalo), because he can often beat the #2 cornerback on a defense. The running game opens up, because the defense can't stack the box with defenders any more (this is even more significant because the Falcons already had the #1 rushing offense in the NFL this season partially due to Vick). The defense has to respect the pass _and_ the rush. And that opens up all sorts of playcalling possibilities. Play action becomes a brutal option, where Vick fakes to Dunn going into the line, the linebackers and safeties take two steps in to stop the run, then realize Vick still has the ball, and that Randy Moss has gotten behind them. Vick stops, launches it, and it's a touchdown. It would be simply devastating.

I don't think the Falcons have enough to interest the Vikings in a trade, but if I were them, I'd consider giving up a first round pick and some of their defensive line depth (maybe Chad Lavalais, a second year defensive tackle who'd be affordable for the Vikings).

I highly doubt it would happen, because, well, it'd be too much fun. According to ESPN.com, a source did reveal today that the Vikings are leaning 60/40 towards trading Moss, so we may get some fun trade scenarios this offseason. He'll probably end up with Baltimore, because they're even more desperate for a receiver than Atlanta. But Kyle Boller isn't nearly as exciting as Vick (although he does have the arm strength - Boller turned into a high first round draft pick when he demonstrated to scouts that he could throw a football through the goalposts from the fifty yard line from his knees). So I'm going to hold out hope for my scenario.

posted at: 19:07 by Eric Nehrlich | path: /rants/sports | permanent link to this entry | Comment on livejournal

Thu, 20 Jan 2005

Cognitive effort
I bought a bed last weekend, and it was delivered two days ago. Yes, I finally decided that I should stop sleeping on the futon that I had bought used in grad school nine years ago. And two nights of sleeping on the nice new bed has made me go "Wow! Why did it take me so long to decide to do this?" A good question. One I actually thought about for a bit, and here's my answer.

It's a matter of energy and attention. We all have certain things that we don't question in our lives, whether it's our religion, our devotion to a given sports team (Go Cubs!), our affiliation with certain groups, etc. We can't question everything. While I love the idea of always being able to pry open the black box to see why something is the way it is, I can't always do that because it takes time and energy. Most of the time, I have to just accept the black box as is, and use it.

So I make a decision, and I move on, and I don't question the decision any more. Whether it's buying a car or a new laptop or what software to run my blog on, I find something that works well enough for the moment and forget about it, leaving more of my time and attention for things I find interesting, like reading or thinking about what I'm going to write on here. It's a matter of conserving cognitive effort for things I care about.

To give credit where it's due, this idea is mostly stolen from Paul Graham's essay on nerds, where he points out that most nerds are unpopular in school because being popular is a full time job (between choosing clothes, going to the right parties, etc.), and nerds don't care enough to bother.

So, in this specific case, every year or so I'd think about getting a new bed, and decide against it because I was sleeping fine on the futon and a new bed is expensive. Each year the futon got worse and worse and my disposable income rose, and this year the lines finally crossed: I got the new bed, and it was so easy that it prompted this post wondering why it took so long. And that's often the way it is. My post about productivity laments this aspect of myself, but I think it's understandable in light of a theory of cognitive effort. Or maybe I'm just making elaborate justifications.

Oh well. Given that this is the fourth post of the evening, I think I'm going to shut up now, turn off my brain, and watch the episode of The O.C. that I taped earlier.

posted at: 22:40 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Infinite games in childhood
A thought struck me this morning on my BART ride into work, in response to Carse's talk. He describes infinite games as those in which the point of playing is to continue to play. Doesn't this describe childhood? Over Christmas break, I was visiting some friends with kids, and I was playing Uno with their four year old. And he was just so happy to be playing that he didn't care who won or lost, or how he was doing; he was just excited about playing. We adults get so worked up about winning and losing that we define ourselves by our results, but a bunch of kids playing baseball will often play for hours without keeping score, because the point of the game is the game itself, not the result.

In fact, it's the adults that ruin kids by injecting finite games into their play. We all knew a Little League dad who was miserable to be around, screaming at everybody because he wanted his kid's team to win. But, as Carse put it, "Evil is where an infinite game is absorbed completely into a finite game." To destroy that sense of play, that sense of joy, for the sake of something as prosaic as winning and losing is wrong.

It's interesting to think what a society based on a childlike state of mind would be like. I think I'd quite like it. Then again, it would essentially be the state of anarchy, which is a concept that appeals to me in theory. But in the "real" (aka adult) world, rules are necessary. People won't play nice with each other, alas.

It also makes me wonder when we lose that sense of childlike joy. Not everybody does, obviously, and the ones that don't are often among our most innovative thinkers (e.g. Feynman and Einstein). But most of us do. I certainly have. I never get that zap of "Wow, this is really cool!" any more, where I'm doing something for the sheer pleasure of doing it. I need to learn to be more immature again :)

Anyway, I thought that the observation that only adults play finite games was interesting. Thought I'd share.

posted at: 21:56 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 10 Jan 2005

More thoughts on gifted education
I've rambled about education before, particularly with regards to gifted education. But I've never been bashful about repeating myself. So here we go again.

Here's a thought experiment that a friend posited a couple weeks ago. From a purely academic point of view, how long would it take a smart kid, working at their own pace with appropriate guidance, to learn the material up through 8th grade or so? Let's say through basic algebra in math, reading and writing in complex sentences, some basic understanding of science, a first pass at American and world history, that sort of level (although, given the horrendous state of public education, that might qualify as a high school education at this point. Yikes!). My guess is four years. Or less. I did K-8 in 7 years, skipping two grades, and I could probably have skipped at least two or three more if it weren't for socialization issues (sixth grade, for instance, was a total waste as the teacher refused to let me work ahead because she felt there were certain things sixth graders did and that was that).

Those pesky socialization issues. What, really, are we teaching our children for those other four years? I can tell you what I learned. I learned that I don't have to work hard to succeed (at least in that environment). I learned that being out of the box often means being crammed back into the box. I learned that I can get away with mediocre work because nobody cares. And I went to an extremely good public school. I can't even imagine what it's like for students in a bad one.

It's really frustrating. I can see some of these acceptance-of-mediocrity tendencies in myself even now, which is how the topic came up when I was talking with my friend. It makes me wonder why we accept such an awful system if people really believe that children are our future. Or are we aspiring to the dystopia alluded to in The Incredibles, where because everybody is special, nobody is?

If I were a cynical Rand-ian, I'd claim that the school system, as presently constructed, is designed to habituate us from birth to not make waves, especially those of us that are smart, because ambitious smart people are disruptive innovators that change power structures. School teaches us to sit still, keep our mouths shut, and conform to the majority. We're taught to obey authority blindly (because teachers hate being challenged), which I think contributes to our acceptance of pseudo-science. If you squint the way I currently am, you can see many of the problems of our society reflected in our education system.

So what would I do differently? I have nothing that could be construed as realistic. To really teach kids right, you need to spend a lot of quality personal time with them, allowing them to pursue their interests in a guided fashion. There are some things that everybody should know, like the basics I outlined above, but beyond that, leveraging children's natural enthusiasm would seem to be the obvious thing to do. And given that children are natural scientists, it seems like we could take much better advantage of that than we currently do with our memorization of orthodox science dogma. Not that I'm saying we should doubt the current scientific paradigm, but that we should give students the opportunity to ask why and, when possible, figure out where the paradigm came from, as Postman suggested.

I don't know what I'd do if I had kids. My friend pointed me at the Montessori method, which looks promising. I'd almost be tempted to home school them. But there is a genuine need for socialization. The smartest person in the world is completely ineffectual if they can't persuade other people to their way of thinking, a skill I continue to hope to learn. I don't know how one teaches that to kids though. Cooperative learning environments? Play groups? I don't really know.

Lots of hard questions, as there always are when I address education. And it's getting late, and it's time for this to be out of my hands, so out it goes.

posted at: 23:49 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 09 Jan 2005

Predicting 2008
A friend of mine commented that Jeb Bush was a strange emissary to send to southeast Asia to oversee disaster relief. My friend also wondered why Colin Powell was along, given that he left the administration recently. My immediate thought was that the Republicans were giving Jeb a higher-profile, statesmanlike image to boost his chances in 2008 if they needed him. Jeb has said he's not going to run, but given the current state of the field, they may want to keep him around as a viable high-profile candidate just in case. And I could totally see Powell being there as a possible vice-presidential nominee.

That started me speculating on who the possible nominees for 2008 are. I honestly don't know. Nor does anybody else. I don't really believe that most of the candidates that MSNBC mentions have a chance. It's been demonstrated repeatedly that Senators make very poor presidential candidates. The number of compromises made in any omnibus deal opens you up to too many attacks, as John "Soft on Defense" Kerry found out this year. Where you really want an experienced senator is in the VP slot, where they can knock heads together to achieve a legislative agenda (think LBJ to JFK). In fact, in retrospect, the ideal Democratic ticket this year would have been General Wesley Clark with Kerry as his VP. Alas. I'd hoped for Clark from the beginning to no avail. He never put together a decent campaign team and couldn't even win his own state.

So, ignoring the Republican senators, who's left according to MSNBC? A couple of low-profile governors (Pataki is lower profile than Giuliani, a former mayor, for instance). Not looking promising. There's always Giuliani or Arnie as possibilities, but that would mean alienating the evangelical conservatives, since neither Giuliani nor Arnie is exactly a pinnacle of moral rectitude. It will be interesting to see which way the Republicans jump on this - continue moving towards being the party of the evangelical right, or move back towards the center (Senator McCain might fall in this category as well). It will depend in large part on whether the Arnie Amendment goes through; if it does, I think they would decide that Arnie was popular enough to declare their independence from the evangelicals. They have a bonus in that the evangelicals, at worst, will just stay home - they would never defect to the other side.

On the Democratic side, the drums have already started beating for Hillary. I think it's a terrible idea. I don't think the Republicans could ask for a better candidate to unify all of their different factions. She alienates the big business guys because of her attempt at health care reform. She alienates the evangelicals pretty much by existing (Lakoff had a great bit in Moral Politics where he described how Hillary basically violates every single Strict Father precept). There couldn't be a more polarizing figure. Not that polarizing is necessarily bad, given Bush's candidacy. But if you have a polarizing candidate, they had better be able to mobilize 100% of your voters, and I don't think Hillary can do that; too many left-wingers have felt betrayed by the Clintons.

Edwards is a hopeless candidate, because he's not only a senator, but he's an inexperienced senator, so he has all of the downside and none of the up. Barack is too far off. Basically, I hate all my choices. So I'm going to toss out one of my own.

Eliot Spitzer. In 2012. The high profile attorney general of New York is running for governor in 2006. In the modern era, governors make the best presidential candidates for taking back control of the White House; after Nixon, we have Carter, Reagan, Clinton and Bush as the candidates that won back the White House. Spitzer has hard core credentials for fighting for the little guy on his side, taking on multi-million dollar companies. He seems like a pretty intelligent guy. If he wins the governorship, and does as good a job of general administration as he has in running his cases, I could see him as a very viable candidate in 2012. Long way off, though.

What to do for 2008? I don't know. I expect the Republicans will try to get the Arnie Amendment passed and run him. If that doesn't work, their fallback plan is probably Jeb Bush in an "I will serve my country if asked" kind of deal. The Democrats will probably nominate Hillary, because they have no other viable candidates, and she'll have the best political machine for the primaries. The Republicans will win, because the Democrats are idiots. So, yeah, 2012. Spitzer. Here's hoping.

Of course, I'm going to be continuing to keep an eye on this. In one of my fantasy worlds, I'll spend the next year or so scouting out the candidates, call it correctly in 2006, join the right candidate's campaign early, ride the campaign to a position of prominence and then be set for life as a political advisor or commentator. Isn't dreaming fun?

posted at: 09:33 by Eric Nehrlich | path: /rants/politics | permanent link to this entry | Comment on livejournal

Thu, 30 Dec 2004

Creationism
I don't know why I let it get to me. But the arguments of creationists just aggravate me so much whenever I see them that I feel compelled to post about it. This morning's aggravation was a result of coming across a link to a pointed criticism of an article by Phyllis Schlafly, where she starts off with "The most censored speech in the United States today is not flag-burning, pornography or the press. The worst censors are those who prohibit classroom criticism of the theory of evolution." The article is infuriating on many levels, but fortunately, the criticism addresses most of the outright falsehoods.

In the comments section of that post, somebody posted a link to this thread commenting on an article by Gary North, explaining that Christians should assert their majority status, and withdraw their children from any school that teaches evolution. The comments on that thread are similarly scary.

What scares me about it is that, first of all, I seem to be in a minority position. The majority of people in this country, according to a poll I saw, either believe in creationism outright, or are unsure of the evidence between evolution and creationism. I'm not sure I want to live in a country that chooses religion over science.

Secondly, I am horrified that the majority of this country apparently can not distinguish between pseudo-science and science. They accept what authorities tell them, and so they think that everybody must do the same thing; it's just a matter of choosing which experts to believe. They choose creationists, and others choose evolutionists, and it's just a matter of faith which you believe in. They think that evolutionists believe in evolution because a few scientists said so. They don't appear to understand the concept of peer review, that while evolution is a theory (as is all science - gravity is a theory too, but try jumping off a cliff to argue with that one), it is a theory consistent with the vast preponderance of evidence that has been found.

Creationists like to point out holes in evolution, saying "Oh, well, it didn't explain this one thing, so the whole thing must be wrong." This betrays a total lack of understanding of science. A theory which explains everything, with no exceptions, does not exist. The entire history of science is a continual evolution of ideas, where theories are tried, exceptions are found, new theories are thought up that both explain the original data and the new exceptions, etc. Newton's laws morphing into relativity is a good example.

However, it seems like the creationists believe that a theory that doesn't explain everything is worthless. I feel that this is only because their alternative is something that explains everything: God. God is an easy answer. Of course God explains everything. But I feel that it's also a totally useless answer in this context. Creationism doesn't give us any insight into how our world works, any thoughts on how we can make our lives better.

Another frustrating thing is that several of the criticisms I read this morning basically say that "Creationism is believed by the majority of this country; therefore it should be taught in schools." Science by democracy. It's unbelievable. Do these people think that the physical world is swayed by what people think? If that were true, the earth would still be flat, and at the center of the universe. But it's not. Scientists like Copernicus figured that out, despite the vast majority of the people thinking they were wrong.

Scientists have to challenge the norm. If they didn't, there would never be any progress. Challenging the status quo is one of the greatest, most honorable things to do in science. All scientists dream of finding an exception, a chink in the best current explanation, because an exception is also an opportunity, a chance to do new science. It is most certainly not a reason to throw the explanation away, as the creationists would have us do. Heck, one of the reasons I dropped out of particle physics was that it looked like most of the work for the next couple of decades was going to be theory-checking; the Standard Model is good enough at this point that it explains things out beyond where we can test and verify them.

The other thing that drives me bonkers about the comments I see from creationists in these threads is that they believe that their lack of imagination means that something isn't possible. Some of them at least concede that small changes are possible, on the order of moths changing color, or beaks changing size and shape, which is good because those are well-documented in our time. But then they say ridiculous things like "All changes lead to inevitable breakdowns of the system" (which starts by assuming that the systems are perfectly functioning to begin with), and "Well, we can see small changes, but the evolution of major changes is impossible" (meaning I can't imagine it). They don't have any conception of how natural selection works over hundreds of generations. The power of combinatorics and large numbers can lead to extraordinary changes.

One of the standard objections is the evolution of the eye. This is used by the creationists because Darwin himself says:

To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree.

Creationists use that quote triumphantly, and say "Even Darwin doesn't believe that evolution can make anything complex!" Of course, if you continue to read, as I just did with the power of Google, he says:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. No doubt many organs exist of which we do not know the transitional grades, more especially if we look to much-isolated species, around which, according to the theory, there has been much extinction. (emphasis added)

Of course, quoting out of context, and using selective reading, is just par for the course for most religionists.

Oh, and the reason I bring up the evolution of the eye is that I read about an experiment where two scientists set up a computer simulation that allowed for natural selection in an optical system (overview here). It started with a patch of light-sensitive cells that could distinguish only light from dark. The simulation modified the system at each step with relatively small mutations and then let it evolve: at each stage, the variant that could perceive an image best was "selected" and used as the basis for new mutations. In only 2000 steps, it had evolved from a flat patch that could only tell light from dark into an eye cavity with a lens, looking remarkably similar to what vertebrates have. They estimate that this could have happened in about 400,000 generations or so.

This isn't proof, by any means. But it shows that even something as complex as the eye, which looks like it must have been designed, can evolve from very simple starting conditions. Because it is better. And natural selection favors good design. If it helps organisms survive, it gets selected. Lots of little changes, accumulated over thousands and thousands of generations, can add up to huge changes. It's a mind-blowing concept. And utterly inspiring to me, because it says that little changes can have an effect, in time.
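
If it helps to see the shape of the idea, here's a toy sketch of that mutate-and-select loop. To be clear, this is emphatically not the scientists' actual simulation - their model scored the optical acuity of a real geometry, while my made-up fitness function just measures distance from an arbitrary target - but even this crude version shows how small random changes plus selection climb steadily toward a "design" that nobody designed.

    import random

    def fitness(x):
        # Stand-in for "how well does this configuration perceive an image?"
        # The real simulation scored optical acuity; this toy just rewards
        # getting close to an arbitrary target value of 100.
        return -abs(x - 100.0)

    def evolve(start=0.0, steps=2000, mutation_size=0.5, offspring=10):
        current = start
        for _ in range(steps):
            # produce slightly mutated variants of the current design
            variants = [current + random.gauss(0, mutation_size)
                        for _ in range(offspring)]
            # "natural selection": keep the best variant if it's an improvement
            best = max(variants, key=fitness)
            if fitness(best) > fitness(current):
                current = best
        return current

    print(evolve())  # creeps toward the optimum, one tiny step at a time

Each individual mutation is trivial; the accumulation over 2000 steps is not. Tiny changes, stacked up over enough generations, add up to something that looks designed.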

Which is interesting, actually. Because now we've come across a weird congruence between religion and science. Religious folks believe that every little act matters, because God is watching, and "...whatever you did for one of the least of these brothers of mine, you did for me." Whereas I believe that little things matter because they can add up to big things, as demonstrated by evolution, or the butterfly effect in chaos theory. I'm not sure what to make of this congruence. Maybe it's coincidence. More probably, it's just a consequence of each person's search to believe that their life matters, when most of the time, it doesn't. So we like believing in fairy tales or anything which might give our life significance.

Anyway. I just had to rant for a while. Creationism does that to me. Science by democracy is bogus. Science by fiat is, too. Pseudo-science, where they use scientific terms but fail to embrace the scientific method, is infuriating. This country is quickly heading down a path towards the Dark Ages, not because we are turning our back on God and morality, as some would have you believe, but because we are turning our back on science and reason and the Enlightenment, which helped us make more extraordinary achievements in the last hundred years than were made in all of previously recorded history. We who believe in science must fight for it as vigorously and energetically as those who are fighting for their God of ignorance and deceit and children's tales. I don't know how to make this happen yet. Or where it's going to happen. But the battle lines are being drawn. And we need to have our weapons ready when it comes. Weapons of intellect, hope and propaganda. Because clearly reason isn't enough. Okay, now I'm depressing myself, so I'll stop here.

posted at: 09:58 by Eric Nehrlich | path: /rants/religion | permanent link to this entry | Comment on livejournal

Tue, 28 Dec 2004

Information Carnivore followup
As usual, Beemer had an interesting response to my last post. I was going to respond on LiveJournal, but decided to use my privileged position on the blog itself. Bwa ha ha ha. More people read it this way. Yeah. Not that readership matters. Because I'm more interested in the discussion. But if more people see it, then there's more likely to be discussion. Yeah! Um. Anyway.

Three things to follow up on.

  1. Text, as Beemer points out, is a great medium because it is random access and low-bandwidth. However, I wonder whether this advantage is as widely shared as we think it is. I think Beemer and I have both read so much, in so many forms, that we have the trick of using text as a random access, low-bandwidth medium. It's unclear to me that others know that trick. Many people, when confronted with a lot of text, just give up, rather than quickly scan through it to determine if there is anything of interest. Including me. I just downloaded a 15-page paper off the net on the theory that I'll read it later. Which won't happen. But I think that this lack of text-parsing ability may relate to the complaint I opened my last post with, which wondered why many people just give up when confronted with my long posts. So this text-parsing may be a skill worth thinking about, and eventually teaching, in addition to the critical thinking skill of parsing multiple sources of input.
  2. Speaking of which, in that post I was thinking of input in terms of text and alternate news streams, but I think it applies more broadly. While I was home at my parents' house, I was watching football, bouncing back and forth between two games, a skill which I've pretty much mastered at home using my picture-in-picture TV, but which was a bit trickier with only the "Last" button. My mom got annoyed and told me to pick one. I realized that the skill to handle multiple streams of input may be just as applicable in video or audio. And I'm not particularly good at it. I know people who are more habituated to TV who can have the TV on in the background while reading and listening to the radio, and still notice when something interesting happens. I think the generation of kids today is one step beyond with their ability to juggle video games on top of all that. It's a multi-modal environment, and developing the skills to handle that is just a matter of growing up in that environment, I think.
  3. Lastly, I wanted to follow up on Beemer's information carnivore observations. I had actually intended some of those analogies, but hadn't made the connection explicitly in the post. Part of the analogy is the greater efficiency in being higher up the food chain. Carnivores need to eat less often than herbivores. And he also observes the downside - a carnivore is exposed to greater concentrations of toxins. As far as the information carnivore goes, the greater efficiency of using secondary sources is necessary because otherwise the vast amount of information out there would overwhelm us. However, we are susceptible to greater concentrations of toxins, by which I mean biases and inaccuracies. At each level of the information food chain, there is a selection process. By the time it gets to, say, Rush Limbaugh, the "news" has been consistently slanted to the right so many times that it may hardly resemble the original story. I think the term information carnivore sums up these advantages and dangers concisely, and reminds us that we are dependent on others for processing information, thus reminding us of the biases and inaccuracies that may be built up by the time a story reaches us.

    It also reminds us that we stand at the top of an information pyramid. With the advantage that, if we choose to move lower down the chain, we can. We can open up the black boxes, find the original sources, do our own data compression, and determine whether it matches the summary that we were given. Obviously, we won't choose to do this often because it requires a lot of time and effort, but it's probably worth doing a couple times to find "information herbivores" that process data and stories into the form that we want. A simple example is finding a movie critic that we like. When confronted with a new critic, we read their reviews of movies that we've seen. We evaluate their opinions, compare them to our own, and when we find a critic who often matches our tastes, we begin to use their reviews to guide us in deciding which movies to see. Do the same thing for books, for products, for groceries, for news, etc., and you begin to see the information carnivore ecology at work.


posted at: 23:32 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 25 Dec 2004

Information carnivore
Sometimes posts start with no more than a good post title. Like this one. Actually, okay, this post started with some thoughts I've been having about different ways of perceiving and handling information. It's something that's been in the back of my mind for a while. In fact, one of my first rants on this blog concerned the subject.

One of the things that concerns me about this blog is that a lot of people I know don't read it because my posts are too long. Part of the length is due to my lack of editing, to be sure. Part of it, though, is because I feel that these are relatively complex issues that I'm tangling with, and that it takes time to take them out, look at them, see them from a couple different perspectives, and decide what I think. It worries me that America seems to be moving towards the soundbite society, where we want simple answers that can be conveyed in a few seconds, where we don't have to pay too much attention. To some extent, I think that this last presidential election was a signal that the American public has chosen simplicity, even if wrong, over acknowledging the complexity of the world.

But I don't want to get distracted by politics here. Let's get back to information modes. One of the other realizations I had recently is that I am a product of a narrow slice in time. My preferred mode of information consumption is the printed word (or the word on screen). I am pretty adept at scanning web pages or books to extract the information I'm looking for. I can handle multiple sources of information, evaluate them and make my own decision as to which source I believe. Part of where I was going with the critical thinking section of my new directions post was trying to figure out how to teach this skill to others.

But what if the need for this skill is just because of this temporary phase that we're in, where information is stuck in a text format? I was surprised to learn of the rise of podcasting, because I dislike audio information transfer so much. But for others, it makes sense. And, as video and other multimedia editing tools become more powerful and common, that will start to dominate text, I'm sure. And people like me that grew up with text and are comfortable with it as a primary medium will slowly get passed by as outmoded. It's already starting to happen; on sports sites that I visit like ESPN, a good portion of content is being delivered in video rather than text, which drives me nuts.

So I wonder if the need for my skill of sifting through large amounts of text is one that is soon-to-be (or already?) outmoded. There's been this eight year run or so where the World Wide Web made everything available in text, and really made having such a skill valuable. But soon it's going to be relevant only to those of us who read books. Is there really value here, or do I just think it's valuable because, well, I do it? Am I already doomed to the long slow decline of technical obsolescence that Douglas Adams describes for those over 30?

However, there's a related skill that I think will continue to be useful. The term I came up with this morning (and the one that inspired me writing a post at all because it made a good post title) was being an "information carnivore". It's taking information that others have already processed and finding ways to use it. I'm not much of one for primary sources. The amount of effort it takes to learn the specialized language of Derrida or Foucault is just not worth it to me to find out what they say. So I read secondary summaries, whether in books or online, synthesize them, and extract information for myself, consulting the primary sources as necessary to elaborate upon a point.

My new directions post included a section on thinking about helping teach others the critical thinking skills necessary to be an "information carnivore". I haven't really thought through the details yet. Beemer made an interesting comment, suggesting that the way to teach people to do something is to put them in a situation where they need it. Offhand, one way of doing that would be to move the teacher in a classroom away from being a voice of authority and towards being a discussion leader. Provide alternative viewpoints, including mutually exclusive ones, and require students to determine which viewpoint to believe by taking into account other information sources. Grade them on their ability to make a good case for their viewpoint, not necessarily on having the "right" answer.

This wasn't as coherent as I thought it would be when I started, partially because I'm distracted by writing this as I'm watching football on Christmas afternoon (yes, I've finally found a way to combine my hobbies). I'll post what I have, and think about it some more. Comments welcome, as always.

posted at: 17:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 15 Dec 2004

Social context in the Monkeysphere
I'm going to cheat here, and respond to one of Beemer's comments in the blog itself rather than with another comment. Mostly because he brought up some points I wanted to address but hadn't gotten around to. This is what I meant when I mentioned that I had a whole big ball of ideas that I was going to tug on a loose end of and see what came tumbling out.

Beemer points out that his Monkeysphere appears to be a lot larger than 150 people. And that perhaps it didn't matter how big a person's Monkeysphere was, but what the shape of the monkeyfunction was. This makes sense. No, really. Well, maybe it doesn't, but try to keep up anyway. I knew what he meant. And I'm going to tackle both of those assertions separately.

I first came across Dunbar's number (the limit of 150 on the size of human organizations) in the book The Tipping Point, and it's fascinated me ever since. I think that it's not necessarily an absolute limit on how many people an individual person can know, but it is a fairly strict limit on how large an organization can get before social feedback mechanisms no longer work. In other words, beyond 150 people, you need to have a structure or hierarchy or some sort of management organization to make things function, because otherwise stuff will fall through the cracks, and people won't care because it is affecting people outside of their monkeysphere. I glancingly addressed this in a post about different management structures a while ago, so I won't get into it here.

So how many people can one know? Know in the sense of feeling like if you ran into them at a bar, you'd acknowledge them, say hi, be able to talk for a bit about friends and/or family. It's probably more than 150. One of the keys here is, wait for it... context. You knew I was still on that hobbyhorse, right? I think one of the keys to the expansion of our monkeyspheres is taking advantage of different contexts. I know a lot of people that I consider friends, but only within a certain context. I have folks I know from the chorus, who I often go out to dinner with after a concert but never interact with outside of chorus. I have a similar relationship with folks from my ultimate frisbee team. Or from work. Then there are friends who have jumped the threshold and have become part of all aspects of my life (there's a whole separate post which I've thought about writing about why it's difficult for me to achieve that sort of crossover, and what I can do to make it easier to deepen and strengthen friendships so they jump the threshold of the context in which they are started, but I haven't figured it out yet).

Within each of those contexts, I may know only 100 or 150 people, but overall, I can know more because I use the contexts to keep them straight. Or something like that. There's always that weird moment when you meet somebody in a different context, and sometimes you don't even recognize them. I've definitely had that experience a couple times when I'm wandering around San Francisco, and somebody from my ultimate team says hi, and I do a double-take and need a reminder of who they are - they look familiar, but my brain can't place them because they're outside of the context within which I normally interact with them.

To address Beemer's second point, I don't think the shape of the monkeyfunction matters so much as how we handle people outside of our monkeysphere. Even if the limit is closer to 1000 than 150, it's still well short of the millions of people in a nation. Or the billions of people in the world. How do we handle that case? I had an interesting speculation about that today (I was sitting in meetings all day today, so I had plenty of time to think about responses to Beemer's comment).

The way in which we handle the case of America seems to be that we have created a "friend" called "America" which we include in our monkeysphere. And anybody else who's "friends" with "America" is automatically included in our monkeysphere. This takes place at lots of levels; for instance, I definitely have a soft spot for fellow MIT alumni, even if I don't know them at all, just because I feel we have a shared experience. We share the same "friend", "MIT".

It actually reminds me of the Fakester phenomenon on Friendster, where people were creating fake personas such as New York City, or the Giant Squid, and connecting to each other via these Fakesters. I wonder if this was just a concrete manifestation of an everyday phenomenon, where we use institutions such as America or MIT as friend placeholders to expand our monkeysphere to handle the social institutions that we have that are much larger than Dunbar's Number.

This leads to the question of how we design better Fakesters, i.e. how we create institutions that do a better job of binding us together. In politics, how do we use such things to bridge political divides? Or how do we use them to help create world communities as opposed to resolutely nation-state-oriented institutions? And, if you've been reading my blog for a while, you won't be surprised to hear that my guess is that stories are the answer. Stories are what bind communities together. Stories give us the protagonists that we can use as Fakesters to expand our monkeyspheres. The country of America is nothing more than a shared story, running from the Founding Fathers, through the Civil War and Abraham Lincoln, through WWI and WWII and the Greatest Generation, JFK and Camelot, and Vietnam - a story that is collaboratively created anew every day by its citizens. It's a shared dream.

So that's my response to Beemer's comments. And, just as a note, I know a lot of my posts recently have been less than polished. There's been enough stuff backed up in my brain that I decided it was better to just start getting some of it down rather than wait until I found the one angle from which it would all fall into place neatly. So apologies for some of the incoherence.

And also understand that this is a work in progress. To some extent, this blog is an excuse for me to publicly map out my brainspace, and I'm very interested in getting feedback. If you don't want to comment on Livejournal, please feel free to send me an email at the address at the bottom of each post. Thanks to all who read. I appreciate the fact that you're interested in what I have to say. Okay, I'll stop babbling now.

posted at: 22:50 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 14 Dec 2004

Truth vs. Context
Beemer had an interesting response to my last post, which he called Truth vs. Context, and I'm going to steal that for the title of this post. As a warning, I have a ton of ground to cover, and this entry is probably going to span at least four posts, if not more. Just the notes I scribbled out and emailed to myself were over a page. God help us all.

I agree with Beemer that there is an objective physical reality. I mean, I was trained in physics: how could I disagree with that statement? After all, somebody once quipped, "Reality is that which, when you push against it, pushes back." (If anybody knows who said that, please let me know - I am clearly paraphrasing it because I couldn't find it on Google.) However, I don't think that helps us in this case. In fact, modern physics actually demonstrates the importance of context. Not only that, but philosophers of science such as Kuhn and Latour have demonstrated that even "objective" science has a large degree of subjectivity in it, in the form of the paradigms and black boxes that nobody questions.

But objective physical reality only goes so far. When I drop an object, it falls to the ground. I can repeat that experiment over and over again and be assured of getting the same result. However, the same is decidedly not true of social interactions. For instance, if I asked somebody "Do you want $2 or $0?", you would think the answer would always be "$2" (I just heard an echo of "I want my $2" in my head). But it's not. It depends on the context.

In fact, history demonstrates that there are virtually no impossibilities when it comes to social interactions. We've tried it all. Dictatorships, democracies, anarchies. Cannibalism. Matriarchies, patriarchies, hierarchies. For any rule that we think we can put our finger on and claim is universal, there has probably been a society somewhere in history that did the opposite. Anywhere I go on this planet, I'm pretty well assured that if I hold a book three feet off the floor and let go, it's going to drop to the floor. I have no such assurance about language. Or customs (is it polite to belch?). Or hand gestures (do you know the equivalent of the middle finger in Korea?). In all of these things, context matters.

Given the fundamental relativity of such things, there looms a larger question: given two competing social contexts, how does one decide which one is "better"? In an Enlightenment universe, reason would determine everything, but I think that reason is fundamentally limited here because reason is a tool; it cannot determine overall goals. To give credit where it's due, many of these thoughts were instigated by the discussion over at Dave Policar's journal, particularly his comment trying to reconcile opposing concepts of how things should work. This whole post is essentially an attempt to examine some different ways of reconciling such opposing concepts, in part by evaluating the contexts in which they make sense. The separation between goals and execution, which is the specific point I'm making here, was also articulated in that discussion. I believe that reason is a good tool with which to evaluate alternative execution strategies. However, it's unclear to me that it can be similarly used to evaluate social goals.

This also explains the schism I mentioned in my last post between the Postmodernist Left and the Enlightenment Left. They are covering two separate areas. The Enlightenment Left covers the physical world. The Postmodernist Left covers the social world. And the tools appropriate for one world do not transfer easily to the other. It's "Truth vs. Context", the title of this post.

So, given two competing social systems, two opposing contexts, how do we choose one? For instance, how can we decide between the Strict Father and Nurturant Parent models of Lakoff's Moral Politics? Lakoff takes a stab at resolving that at the end of the book, but he essentially just asserts his own opinions and decides in favor of his progressive politics.

I'm not sure there is a way. Any metric we choose to decide between them can be dismissed as biased, because everything in the social universe is, by definition, biased by the chosen context. I've struggled with this question before. The conclusion I came to that time (also with Beemer's help) was that perhaps it could be demonstrated that "Good" systems reinforce themselves, whereas "Evil" systems eventually annihilate themselves. In other words, "Good" systems are indefinitely sustainable and create a virtuous circle. How one goes about showing that is a really good question.

I was going to go on and start trying to apply some of these ideas to ethical systems, but I've been writing here for about two hours, so I think I'm done for the evening. I'll try to get back to it tomorrow. The number of branches of investigation available along these lines is dizzying; hence the four or five emails I sent to myself today with different paths to explore. Or I may get bored with it and go explore something else entirely.

posted at: 22:52 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Context in modern physics
I mentioned that modern physics actually demonstrates the importance of context. This is a total aside, which is why I'm putting it in a separate post (think of this as a long DFW-esque footnote), but while I was contrasting "objective physical reality" with the contextual social world, I realized that the clockwork picture of objective physical reality is a thing of the past. In an earlier post, I mentioned the "Enlightenment Left", which believes in the preeminence of reason and the ability of logic to conquer all. They envisioned a clockwork universe, set in motion by a Prime Mover and following Newton's Laws throughout time, in which, if one knew the positions and velocities of every particle in the universe at a given time and had enough computing power, one could predict the position of every particle until the end of time.

However, we now know that is not possible. Chaos/complexity theory has demonstrated the extreme sensitivity of many systems to their initial conditions, as is most famously illustrated by the butterfly effect, where a butterfly flapping its wings could cause a storm halfway around the world. In other words, to stretch a metaphor too far, chaos theory demonstrates the importance of context (initial conditions) even in something as prosaic as simple classical mechanics.
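(If you like seeing this sort of thing made concrete, here's a minimal sketch in Python of what sensitivity to initial conditions looks like. It uses the logistic map as a stand-in for a chaotic system; the particular map, starting values, and step counts are just my own illustrative choices.)

```python
# A minimal sketch of sensitivity to initial conditions, using the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4.0).
# The two starting points differ by one part in a billion, yet the
# trajectories become completely uncorrelated within a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # a butterfly-flap-sized difference

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  (diff {abs(a[step] - b[step]):.6f})")
```

Run it and you'll see the two runs track each other for a while and then wander off into totally different values - which is all "context matters, even in classical mechanics" really means here.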

Quantum mechanics is another area of modern physics that can be construed as demonstrating the importance of context. The two-slit experiment is a good example. Each photon somehow "knows" whether the other slit is open, and the interference pattern appears or disappears based on that information. You can come up with all of the "probability wave" explanations you want; it's still spooky and counter-intuitive. And I won't even get into the EPR paradox and entanglement, mostly because I don't really understand either. But it all points to the futility of trying to analyze a system in isolation, without knowing everything else it is interacting with - its context.

posted at: 22:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

The Ultimatum Game
I mentioned that there would be cases when people would answer the question "Do you want $2 or $0?" with "$0". This is what actually happens in the Ultimatum Game, described here, with references. The basic idea is that two players are asked to split up a pot of money, say $10. The first player proposes a split between the two players. The second player is given the option of accepting the split as offered, or rejecting it, in which case neither player gets anything. In the isolated case of "Do you want $2 or $0?", a responder would almost always take the $2. But when the responder is playing the Ultimatum Game and knows that accepting the $2 means the first player gets $8, half of all responders reject the $2 and take nothing.
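(For the programmers in the audience, here's a minimal sketch of a single round, just to make the mechanics concrete. The $10 pot, the 30% rejection threshold, and the function names are my own illustrative choices, not anything from the paper linked above.)

```python
# A minimal sketch of one round of the Ultimatum Game with a $10 pot.
# The proposer offers a split; the responder either accepts (both get
# paid as offered) or rejects (both get nothing).

def ultimatum_round(pot, offer_to_responder, responder_accepts):
    if responder_accepts(offer_to_responder, pot):
        return pot - offer_to_responder, offer_to_responder  # (proposer, responder)
    return 0, 0  # rejection: neither player gets anything

def rational(offer, pot):
    # A purely "rational" responder takes anything greater than zero.
    return offer > 0

def fairness_minded(offer, pot):
    # A fairness-minded responder rejects offers below 30% of the pot.
    return offer >= 0.3 * pot

print(ultimatum_round(10, 2, rational))         # -> (8, 2)
print(ultimatum_round(10, 2, fairness_minded))  # -> (0, 0): the $2 is refused out of spite
```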

This is a fascinating result to me. It demonstrates the all-encompassing importance of context. In an isolated context, people answer one way. In a social context, they answer differently, where feelings of fairness are brought into play. In fact, one study used MRI scanning to demonstrate that unfair offers activated a part of the brain that is associated with negative emotions, including, one would assume, spite. The paper goes on to point out that the MRI results demonstrated a conflict between "the emotional goal of resisting unfairness and the cognitive goal of accumulating money."

One might wonder where humans learned what "fairness" is, and why it is built into our brain chemistry. This paper gives some insight into how such an instinct evolved. In it, the authors run computer simulations and demonstrate that the fairness instinct can evolve in the Ultimatum Game if participants are given a history. If it were a one-off game, the first player would always make the split uneven, and the second player would decide that something is better than nothing. However, if there are repeated iterations, the second player can spite the first player by holding out for a "fair" split, enhancing the likelihood of getting a better deal in the future. In other words, fairness only matters when you are likely to interact with the same people repeatedly - "When reputation is included in the Ultimatum Game, adaptation favors fairness over reason. In this most elementary game, information on the co-player fosters the emergence of strategies that are nonrational, but promote economic exchange." And the MRI studies demonstrate that such strategies, such feelings of fairness, are actually built into our brain chemistry.
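(And here's a toy sketch of the reputation effect - emphatically not the paper's actual model, just my own simplified illustration. The idea: each responder has a minimum acceptable share, and if proposers can learn that threshold from the responder's history, holding out for a "fair" split pays off, whereas in an anonymous one-off game it doesn't.)

```python
import random

# A toy illustration (not the linked paper's model) of why reputation
# favors fairness in the Ultimatum Game. Each responder has a minimum
# acceptable share of a $10 pot. In anonymous one-off games, proposers
# offer the bare minimum ($1); when reputation lets proposers learn a
# responder's threshold, they offer just enough to avoid rejection,
# so the "unreasonably" demanding responders come out ahead.

random.seed(0)
responders = [random.choice([1, 2, 3, 4, 5]) for _ in range(1000)]  # each one's minimum acceptable offer

def payoff(threshold, proposer_knows_threshold):
    offer = threshold if proposer_knows_threshold else 1
    return offer if offer >= threshold else 0  # a rejected offer pays nothing

for label, knows in [("anonymous one-off", False), ("with reputation", True)]:
    demanding = [payoff(t, knows) for t in responders if t >= 3]
    print(f"{label}: avg payoff for demanding responders = {sum(demanding) / len(demanding):.2f}")
```

In the anonymous case the demanding responders earn nothing (they keep rejecting the $1), but once proposers have to account for their reputation, their stubbornness is exactly what earns them more.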

This leads to an important result in my mind. Because we are primates, and prisoners of the monkeymind, things like fairness and social justice only matter when we are dealing with those within our monkeyverse - people within our social universe whom we are likely to run into again, even if they are only Familiar Strangers. We don't rip off the guy at the corner convenience store because we stop in there regularly. We pay our fair share at dinners with our friends because we know we will be going out to dinner with them again.

However, if we are dealing with strangers, with people we don't feel are part of our world and with whom we will never have to interact again, then all the rules of fairness go out the window. We are returned to pure self-interest. It's like a one-off round of the Ultimatum Game. We feel fine cheating the people we don't know because, in an emotional sense, they aren't people to us. They don't evoke our rules of fairness. They are objects in the world, to be used and disposed of.

How do we expand our monkeyverses, so that we keep ourselves from doing stupid things like stealing from strangers, committing hate crimes, and invading foreign countries? My answer is probably not surprising: we use stories as a way of learning the details about other people that change them from cardboard cutouts into people. By turning them into real, three-dimensional people, stories can activate our monkeybrain and all of the accompanying emotions of fairness and guilt. Such emotions leverage the way our social brains have evolved, hopefully getting us to treat each other better. It's a theory. And one I'll probably return to at some point.

posted at: 22:17 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal