Tuesday, April 17, 2007

Will Imus Keep a Conservative From Taking ’08, or Pave the Way for One?

As everyone jumped on board the “Fire Imus” bandwagon last week, I experienced a very familiar feeling: a violent futility that mirrors the catch-22 in which conservatives often find themselves. I wanted to yell out, but realized that doing so might help fulfill the stereotype that conservatives are angry loudmouths. I wanted to defend him as others have by comparing his comments to what we regularly hear in hip-hop, but others might see me as fulfilling the “conservatives are racist” stereotype. Whatever my reaction, I realized I was in a catch-22, where any response that seemed natural would “fulfill” the false stereotypes that leftists have successfully generated about conservatives.

I experience the same thing at my liberal academic institution, and even from time to time in the church, where things are more political than ever. If I say that racism as the academy defines it isn’t as bad in America as it is in other places, I’m part of the problem, because I am denying and suppressing the truth of racism. If I argue that we, as a society, have an obligation to help those in need, especially minorities, then I am paternalistic, and my help isn’t wanted. It seems that the only real way to gain traction is to take the “low road”, which is what we used to call the “high road”. In our easily offended culture, the only way to show that you are truly above the fray, that you are morally above reproach, is to champion every cause that “helps” the underdog and pounce without mercy on the perceived racists and bigots of the world.

So conservatives may have a harder time, maybe a much harder time, defending their ideas in the future. Think welfare reform was hard in the mid-90s? Who would want to touch it these days? Think immigration reform is likely now? If Imus can get fired for what he said, the labels “racist,” “white supremacist,” etc. will surely be in play for those advocating reform, and few politicians have the will to follow through. I would argue that unless there is enormous push-back among voters and consumers, the sensitivity of political correctness in America that Imus exposed will make it much harder for a conservative to be elected president. After all, who among the politically indecisive would want to ally themselves with Imus by NOT voting for Obama or Clinton? In other words, ideas may be losing their power; now, it’s all about posturing.

Or, there may be an enormous vacuum that a strong conservative can fill. If frustration over what happened to Imus and the subsequent debate about hip-hop and hypocrisy in the media builds, an outspoken conservative may be able to awaken the sleeping Republicans. Not that I am very confident any significant legislation would emerge, but it may be better than the alternative. Perhaps the backlash over the Imus debacle could actually lead to immigration reform, but only if conservative politicians show the political will they have so far been unwilling to exhibit.

In the end, ideas must continue to be of primary importance, not the opinions of others. The strong, silent type so perfectly seen in the likes of Howard Roark is becoming a more and more hated commodity, and far too many are taking the low road of championing causes in speech only. If all it takes to be above reproach is to say the right words, which may or may not be true, our standard for critique has become dangerously low. I propose we ignore most of what people say, and focus on what they do, because talk is cheap, for better or worse. As lofty as the Sermon on the Mount was, it would have meant very little if Jesus had never healed the sick or raised the dead.

Friday, April 06, 2007

It was a Long, Cold Global War on Terror/Terrorists/Terrorism

I have become increasingly confused as to what to call the war, or the series of battles, or the skirmishes in which our military, or intelligence, or coalition forces are engaged in Iraq, or Iran, or the Middle East, or the world, against terror, terrorists, radical Islam, or haters of freedom. Now that I have made it clear what I am confused about, let me express my real confusion: how has a false threat (extreme climate change) overtaken a real threat (terrorist states with nuclear capabilities)?

I am reminded that the first stage of grief is denial. We saw a pretty clear case of denial this week when the House Armed Services Committee banned the phrases “global war on terror” and “long war.” As the offensive “surge” seems to be working in Iraq, and Iran is starting to show its vulnerabilities by taking hostage, then quickly and oddly releasing, 15 British marines and sailors, it seems House Democrats need to deny any of these signs of progress. The best way to do so is to ignore the reality in which we are engaged: a long, global war on terror. It reminds me of the victims in horror movies who repeat lies to themselves over and over for comfort: “He’s gone,” or “It’s going to be okay,” or “It was just a bad dream,” all the while the audience knows a madman with a knife is hiding behind the curtains. So while they are quite literally denying the real war, they are embracing a false one.

It seems inevitable now that Congress will eventually act on the “growing consensus” that extreme climate change is both man-made and reversible. Perhaps the next President will sign us up for the Kyoto Protocol. In the meantime, skeptics of global warming, like Michael Crichton, are articulately and patiently voicing doubt. His State of Fear makes several particularly salient points with regard to the faulty science that so assuredly predicts global disaster. But perhaps more importantly, he points out that we have essentially replaced our Cold War-era fears with fears of environmental destruction. For instance, he notes the massive upswing in words like “crisis,” “catastrophe” and “disaster” in the news media since the end of the Cold War, apparently to fill the vacuum of fear. Consider this excerpt:

“There was a major shift in the fall of 1989. Before that time, the media did not make excessive use of terms such as crisis, catastrophe, cataclysm, plague, or disaster. The word catastrophe was used five times more often in 1995 than it was in 1985. Its use doubled again by the year 2000. And the stories changed, too. There was a heightened emphasis on fear, worry, danger, uncertainty, panic.”

Crichton claims the rise in these words is directly linked to the fall of the Berlin Wall, and the vacuum of fear that was then created. With the Cold War over, what would we wring our hands over?
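Out of curiosity, here is roughly how one would test a claim like Crichton’s: tally fear-laden words per year across a news archive. A minimal Python sketch, with a two-article stand-in for the real corpus:

```python
# A minimal sketch of measuring the trend Crichton describes: count
# fear-laden words per year in a corpus of news articles. The corpus
# below is a hypothetical stand-in for a real archive.
from collections import Counter
import re

FEAR_WORDS = {"crisis", "catastrophe", "cataclysm", "plague", "disaster"}

articles = [
    (1985, "The summit ended without agreement on arms reductions."),
    (1995, "Experts warn of a looming catastrophe as the crisis deepens."),
]  # stand-in for a real news archive

counts = Counter()
for year, text in articles:
    words = re.findall(r"[a-z]+", text.lower())
    counts[year] += sum(w in FEAR_WORDS for w in words)

for year in sorted(counts):
    print(year, counts[year])   # 1985: 0, 1995: 2
```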

But according to this insightful article in the Wall Street Journal, our face-off with Iran is eerily similar to the Cold War. So what seems to have happened is this: we lived with fear during the Cold War, and even learned to adjust to it. We won the Cold War, but then had a vacuum to fill, so we filled it with a classic fear-mongering scam: global warming. But now that we face a very real threat again in Iran, we are incapable of dealing with it, because other, supposedly larger threats like environmental degradation have diverted our attention.

Worse, these supposed threats have relativized our fears, allowing many to say, “What’s worse: one lunatic in the Middle East or the extinction of life on earth?” This false relativization has prevented us from switching gears yet again: the public mood is so firmly focused on the false war of climate change that we can’t focus on the real one, the Long, Cold, Global War on Terror.

Worthy of note as well are the similarities between Iran's ideology and Hitler's, mainly in the lack of fear concerning national suicide. In a culture where martyrdom is upheld as a social virtue, we have lost the bargaining power we had with the Soviet Union, which did at least seem interested in national survival. Did our victory in the Cold War convince us we cannot be beaten? If we continue to discredit the fear of global warming, will we be able to focus again on Iran, or similar nation-states with evil intent?

I reckon I am glad it is Holy Week: at least I have a theological answer to all of these questions, if not a political one.

Monday, April 02, 2007

Relievedebtor Echo Syndrome? “Christian” Under Attack Again

This is hardly an echo of anything I wrote, but in November, I posted some thoughts regarding the viability of the word “Christian” in a Postmodern context. Because America largely sees itself as Christian culturally as much as spiritually, the label seems to be losing its vitality, and needs either to be reclaimed or replaced. I suggested “Disciples”, “Followers of Jesus”, even “Apostles”, all of which have limitations of their own. In the end, I’m not sure how much titles matter: “A rose by any other name…”

Anyway, this article made two salient points on the question, this time regarding Evangelicals specifically. Because of fracturing within Christianity and even within Evangelical circles, it is harder, if not impossible, for that term to define a core of beliefs. What is it that unites them? Scriptural inerrancy? The movement of the Spirit? Worship style? Global warming?

Yes, apparently global warming, as a “subset” of stewardship of creation, is a hot-button issue for some Evangelicals. These are environment-friendly conservatives who may agree with Al Gore that caring for the environment is now a “moral” issue. But is there more behind this recent embrace of environmentalism? I have a sneaking suspicion at least two major forces are driving this unlikely union, besides legitimate concern for stewardship of creation. (I personally do not equate stewardship of creation with massive economic regulation.)

First, Evangelicals are tired of being taken for granted by Republicans. As David Kuo points out very well in Tempting Faith, Evangelicals have been seen as guaranteed votes by the Bush administration, and it’s worked well for Bush. Now spoken of in the same language long used of minority groups and their votes for Democrats, Evangelicals will simply not go down that road. In other words, Evangelicals are too smart to be mere pawns in a nasty political game; they want to carve out their own niche. Second, in true Postmodern form, Evangelicals are becoming more liberal at heart, which to my mind is not altogether a bad thing. Major divisions in Evangelical circles around the ordination of women and the accuracy of the Bible have already taken place. I would not be surprised if the homosexuality question is next. Whereas that question has been debated for many years in mainline Protestant circles, Evangelicals to my mind have hardly entertained the conversation. That may soon change.

So like the word “Christian”, the word “Evangelical” is quickly losing value. In the words of the author Paul Chesser, “One historical credo for traditional evangelicals is that they stand on the truth, first grounded in the Bible, and secondarily in measurable, incontrovertible evidence. Human-induced global warming doesn’t pass either test...If evangelical is allowed to go the way of Christian, we may need to develop yet another identifier. Conservagelical, anyone?”

Thursday, March 22, 2007

The Economist and the Land: Lessons from Milton Friedman

A few months ago I came across a documentary on PBS recounting the life of the economist Milton Friedman, one of my true intellectual heroes. The production was mostly sympathetic in treating his accomplishments and his personality, and it revealed aspects of his personal life. One thing that intrigued me as I watched was the footage showing Mr. Friedman at home. Rather than spending his personal time near the campuses at which he taught, Friedman and his wife chose to spend the spring and summer months at their rural second homes. Throughout most of Friedman's career, he and his wife would return frequently to their house in Vermont, called Capitaf.

What I remember from the images of the family enjoying themselves at their second residence was the architectural distinctiveness of the houses. Capitaf, in Vermont, was a very modest structure, nestled among the trees that comprise much of the forest covering the property. The main living room featured a panoramic view of the valley below, made possible by generous expanses of glass and thin framing. The inside was further connected to the outside by deep overhangs offset from the windows, which emphasized the house's overall horizontal proportions, similar to the Prairie Style houses of Frank Lloyd Wright. I remember the house's materials being simple, reflective of its site and probably of the regional vernacular. Milton and his wife Rose would return to Capitaf for more than 30 years, which attests to a true love for the land and their resolve to dwell in a structure respectful of the landscape.


When he was made a fellow at the Hoover Institution at Stanford, the Friedmans chose a secondary residence in California that possessed the same kind of relationship with its environment that their home in Vermont had. He states in his autobiography:


"Initially we continued to spend spring and summer quarters at Capitaf, our second home in Vermont. However, we soon came to appreciate the inconvenience of maintaining homes a continent apart and began to look in California for a replacement for Capitaf."


On the Pacific coast 110 miles north of San Francisco, the Friedmans came across the Sea Ranch condominiums and decided the place was optimal for their needs as scholars who love the outdoors. When I learned that he and his wife lived in the condominium community for many years, I was impressed that such a landmark building was so loved by one of the world's most admirable thinkers.


First completed in 1965, the Sea Ranch condominium complex is considered by many architectural history surveys to be one of the first examples of a “post-modern” style. Counter to the principles of universality and uniformity embodied in the Modernist movement, the design of Sea Ranch was driven by its surrounding context, sensitive to the place’s natural and historical particularities. The complex was Charles Willard Moore’s first triumph, and helped establish his reputation as a prophet of the post-modern architectural movement. Moore would go on to run a handful of successful architectural practices, earning a reputation as a humble designer and writer who eagerly collaborated with his colleagues as equals, and he enjoyed a long career as a teacher. Sea Ranch marked his first major phase, one preoccupied with reinserting formal variety, designing simple details and specifying modest materials. This phase was later succeeded by another that meditated on the role of architecture as language and ironic metaphor, best depicted in his design of the Piazza d'Italia in the late 1970s.


Whereas a typical modernist response would feature flat roofs, texture-less walls of concrete or stucco, and large planar glazed walls emblematic of the International Style, Charles Moore’s approach consisted of adapting the formal typology of the Northern California barn (one of which is found on the site), wrapping the walls with weathered wood siding, and orienting each unit to take advantage of the natural daylighting and breezes available on the site. Projecting bay windows frame views of the Pacific Ocean while providing an intimate scale for the occupant. Each unit is unique in plan and elevation, but such heterogeneity is tempered by Moore’s reductive vernacular, his consistent materials, and the repetition of shed roofs. The landscaping is limited to keeping the native grasses short enough to prevent brushfires, accomplished by a herd of grazing sheep, further emphasizing the historical identity of the site. One’s overall impression is of a quiet village firmly rooted in the landscape, as if it had been there for a long time.

What must have attracted Milton and Rose Friedman to Sea Ranch was the calmness and privacy inherent in its design. The gentle terracing of the units along the sloping bluff, and the almost parallel relationship between the roof pitch and the incline of the land, express a quiet humility before the environment. The lack of porches and the deep overhangs convey a desire for occupants to be left alone. It’s the perfect environment in which to reflect on all kinds of matters important to a man like Mr. Friedman, and he could pursue his research there with complete concentration. He ended up living over two decades at Sea Ranch, before living there became too inconvenient in his old age. He wrote:

"In 1979, we purchased a house on the ocean in Sea Ranch, a lovely community 110 miles north of San Francisco. In 1981, we disposed of Capitaf and began to spend about half the year at Sea Ranch at intervals of a week or so, spread throughout the year, rather than in one solid block. It proved a fine locale for scholarly work. The Internet plus an assistant at Hoover more than made up for the absence of a library near at hand."

Sea Ranch is probably one of the best-known examples of what is now called “green architecture”. It didn’t incorporate as many environmentally friendly materials as are available today, but its response to the site’s microclimate, its use of local renewable materials for its structure and exterior, and its application of native vegetation are all common “green” strategies.


To my knowledge, Friedman was not an environmentalist in the sense that he would have favored rationing production by government decree. What seems apparent from his time at his Capitaf retreat in Vermont and at Sea Ranch is that he enjoyed nature for its spiritual and emotional power. He seemed to relish its simplicity and calm, which probably helped ensure his long life and his undiminished sense of humor.
Update: Welcome Marginal Revolution readers! Feel free to explore other postings on the site. With a Franco-American architect and a classical guitar-playing Lutheran pastor as your writers, there's lots of stuff that you might find interesting...

Tuesday, March 20, 2007

Is Our Lack of Manners a Return to Primitivism?

I was fortunate enough recently to visit Paris, and I was reminded of the importance of manners. Especially in a place where one doesn’t speak the language, it is crucial, if you aim to be a polite guest, to be sensitive to the mores and traditions of the host. Most of this can be accomplished with basic manners. So my wife and I went out of our way to not be the ugly American, to observe Paris for exactly what it is, and respect it as we found it. The Parisians seemed generally happy to accommodate.

First, some comments on the French: they get a bad rap. Of course, I was a tourist in tourist areas, and perhaps it was in their best interest to be polite to me. After all, Americans are used to tipping, and the French don’t seem to find it patronizing anymore. (We found a 10-15% tip was the norm indicated on the menus.) The rumors of rudeness I had heard proved unfounded for me, even as a non-French speaker. Everyone from wait staff to pedestrians was willing to help, and often spoke English before I could even ask, in the broken French I remembered from high school and college classes, whether they could. (It was hopelessly obvious I was an American tourist, complete with a large camera on my shoulder. I have no shame.)

Manners, I was reminded, are the oil of societies. As much as children resist them in their more rebellious years, and as much as societies in general revolt against them from time to time, I see the wisdom in highly regarding a polite society. Criticisms of such societies usually run along these lines: to have excessive manners is to mirror the ruling classes, and hence give them tacit authority to rule. This is why kids rebel against the manners of their parents, and why entire cultures change their minds about manners. (We no longer curtsy, for example, which is probably a good thing.) Postmodernism has taught us to distrust authority, and consequently, it seems, we distrust the rituals, however minor they may appear, that sustain such authority.

For example, I am not sure we show the kind of deference we used to for veterans, our elders, or authorities. If respecting authority is no longer a recognized value, it makes sense that manners could be seen as disposable. Pastors, politicians, and parents are all traditional authority figures that are subject to question, perhaps more now than ever. As I contemplate walking in my father’s footsteps to record my grandparents (all four are still living, in their mid-80s), I wonder whether my children will have any interest in listening to what they have to say.

I should add some context to my Paris trip as well: before I left, I saw a newscast about the way teenagers have descended to, shall we say, “primitive” forms of dancing, rubbing their hips against each other before so much as exchanging names. I have also mourned anew the popularity of rap music, now that I have finally admitted to myself that it is not a fad, and that children from affluent, moral homes cherish the most base music produced. I know this is nothing new, and I recognize I’m a bore with such topics, but I can’t help but think that these overly sexual nightclub rituals are but two examples of our erosion of a polite society. To disregard manners is to disregard authority. To disregard authority is to lose self-governance. To lose self-governance is to begin the path to primitivism.

I thought about this as I toured Versailles. Instead of the usual shred of contempt I hold for the monarchies of such places, I actually found myself defending them. Of course, I’m a republican (little “r” before big “R”), in the true sense of the word, not an advocate for monarchical power. But at least this place had manners, if nothing else. Absurd manners, sure, probably even oppressive manners. But Versailles and its opulence didn’t look so bad when compared to the tribalism our society seems destined to romance.

All of these thoughts from a little trip to Paris. Who would have thought I would leave Paris praising them for their manners and chiding America for its lack thereof?

Thursday, March 08, 2007

The President and his Land: Lessons from the Architect of the "Western White House" in Crawford

As stories of Al Gore's profligate energy use at his mansion in Nashville have circulated, some bloggers have made mention of the environmentally friendly design of the president's ranch in Crawford, Texas. The former Vice President's practice of offsetting his energy consumption with carbon credits brings to light one proposed way of leading a "green" lifestyle. Paying for the right to emit greenhouse gases in exchange for promises to absorb those gases elsewhere seems much easier than changing one's lifestyle by rationing energy use and integrating natural processes into our daily lives. Carbon trading seems to me more a matter of abstract bean-counting than of living in the greater direct harmony with our natural surroundings that defines the ecological lifestyle.
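To see how abstract the bean-counting is, consider a back-of-the-envelope sketch in Python. Every figure below is an assumption I've made for illustration (emissions factors and offset prices vary widely), but the shape of the arithmetic is the point:

```python
# All numbers are illustrative assumptions, not Gore's actual figures.
ANNUAL_KWH = 200_000        # an assumed mansion-scale electric bill
KG_CO2_PER_KWH = 0.6        # rough US grid emissions factor (assumed)
OFFSET_PRICE_PER_TON = 10   # dollars per metric ton of CO2 (assumed)

tons_co2 = ANNUAL_KWH * KG_CO2_PER_KWH / 1000    # 120 metric tons
offset_cost = tons_co2 * OFFSET_PRICE_PER_TON    # $1,200

print(f"{tons_co2:.0f} tons of CO2 'absolved' for about ${offset_cost:,.0f} a year")
```

A year of mansion-scale emissions cleared from the ledger for roughly the cost of a dinner party; it is easy to see why the accounting approach is more popular than the lifestyle approach.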

From my perspective, the greatest benefit of employing green design is the opportunity it affords to strengthen the bond a dweller has with the spiritual qualities of the site. There's a bit of a transcendentalist influence to this idea, but I find that those who demonstrate a true appreciation for nature are those who actually own property on which that nature is found. As property, the natural assets within it have a real, not just transcendent, value. The fact that Bush's ranch house features "green" design has less to do with conserving marketable resources on his property than with a desire to demonstrate a genuine love for his site.

How do I know this? I was a student of the architect who designed Bush's "Western White House". David Heymann, as talented a professor as he is a designer, shared numerous stories with his students about the experience of working for "the governor" at the time. Bush's ranch house was planned at the beginning of his second term as governor of Texas. Before that commission, Heymann was engaged more as an academic than a practitioner, having designed a handful of small projects in the Midwest and Austin. By serendipity and extremely loose connections with the Bush family, he was entrusted by them to realize their permanent retreat in Central Texas.

Heymann was responsible for teaching site design at the school, and pushed students to ask thoughtful questions about the meaning of landscape, context, and a building's relation to nature in both the technical and the spiritual sense. As the ranch was being framed in Crawford, Heymann would show pictures of its construction and provide details on the systems and materials being used. Although he confessed that his politics were at the opposite end of the spectrum from the governor's, he revealed that he had never had a better working relationship with clients than with George and Laura. The future first lady made sure that the architect's needs were attended to, even calling him to see if he had arrived home safely from his site visits.

Part of why Heymann enjoyed working for the Bushes was their willingness to listen. As a self-described environmentalist, Heymann was determined to integrate sustainable design into the new ranch. George W. Bush heard his argument in a one-on-one meeting, letting Heymann make his case. From my recollection, the architect's contention was that if you love the land you inhabit, as the governor surely did (he has continued to return often during his presidency), you will do what is necessary to preserve its essence and implement a design that does not intrude upon the landscape. The resulting single-story home is rather small compared with what it could have been, at only 4,000 square feet, with heating and cooling partially provided by geothermal technology, and rainwater collected and stored for irrigation. It also produced a funny story involving Laura accompanying the architect to a store in search of water-efficient toilet fixtures.

Construction photos revealed a shallow plan oriented along the path of the sun. A deep roof overhang shades the south-facing facade while also functioning as a covered porch that runs around the entire perimeter of the house. The width of the house is limited to a single room, which uses the perimeter porch as an important means of circulation along the house's length. The porch thus serves as a breezeway, a space that mediates the transition between inside and outside. This transition between the sheltered spaces and the surrounding landscape is made more seamless by maintaining grade level throughout. The house does not sit on a raised podium, but anchors itself firmly within the lay of the land. A gravel-filled moat surrounding the porch floor collects water runoff while delineating a sensitive threshold between the house and nature. Copious floor-to-ceiling windows provide each room with distinct views, further tying the indoors to the outside. In an effort to tie the house to the surrounding region as a whole, it is clad in leftover limestone from local quarries, which exhibits a rich variety of colors often eliminated when the stone is processed into clean, cream-colored masonry. The metal roof, while contributing to a higher albedo (reflection of light back to the sky), evokes the sheet-metal barns and sheds of Texas.
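The logic of that deep south-facing overhang is simple solar geometry, which can be sketched in a few lines of Python. The numbers are my own illustrative assumptions (Crawford sits near 31.5 degrees north; the window height is hypothetical), not Heymann's calculations:

```python
import math

LATITUDE = 31.5             # degrees north, roughly Crawford, TX (assumed)
SUMMER_DECLINATION = 23.45  # sun's declination at the June solstice
WINTER_DECLINATION = -23.45
WINDOW_HEIGHT_FT = 8.0      # hypothetical floor-to-ceiling glazing

def noon_altitude(latitude, declination):
    """Sun's altitude above the horizon at solar noon, in degrees."""
    return 90.0 - latitude + declination

def overhang_depth(window_height, altitude_deg):
    """Horizontal projection needed to shade the full window height."""
    return window_height / math.tan(math.radians(altitude_deg))

summer = noon_altitude(LATITUDE, SUMMER_DECLINATION)  # ~82 degrees
winter = noon_altitude(LATITUDE, WINTER_DECLINATION)  # ~35 degrees

print(f"Depth to shade {WINDOW_HEIGHT_FT:.0f} ft of glass at summer noon: "
      f"{overhang_depth(WINDOW_HEIGHT_FT, summer):.1f} ft")
print(f"Winter noon sun sits at only {winter:.0f} degrees, "
      "so it slips under the overhang and warms the rooms")
```

The same overhang thus shades in summer, admits the low winter sun, and doubles as the porch roof.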

Heymann notes a current trend of people moving out of the cities to rural sites in order to recapture a sense of calm and the replenishing energy of nature. As with the Bush family's Crawford residence, he sees it as imperative that sensitive site design, and an architecture that derives some of its energy and materials on site, reveal nature's regenerative power. Although my former teacher, like half of all architects these days, would call himself "green", his rationale for applying green strategies is for me the most compelling. He may believe that a more systematic and universal application of green methods could make a noticeable difference against global warming, but he also understands that there is an emotional and aesthetic case for green design that can persuade people regardless of their political disposition. I'll admit that he was one of my school's best teachers, commanding strong rhetorical skills often lacking among architects. Having David Heymann critique your studio project was a mixed blessing, in that he would have a lot to say about it and would not hesitate to point out the weaknesses of your design at length. But contrary to many reviewers who'd rather fumble with pablum, everything he said was well-reasoned and insightful, peppered with humor and an awkward passion. He posed difficult questions about assumptions we students took for granted, demonstrating a convincing grasp of the latest architectural theories. I wouldn't be surprised if Heymann's personality as a dedicated teacher rubbed off on the then-governor.

Heymann grew to like the future first family, and may have actually attended Bush's first presidential inauguration. Although he shared little with the president politically, he would not refuse the opportunity to get to know a man who reached out to his architectural creed. The lesson here is that the relationship between a client and an architect is about more than partisan affiliation. It entails fulfilling the client's emotional bond to his land and allowing the architect to achieve a solution respectful of the site's uniqueness.

This goal is close to what I think architecture should be: creating beautiful spaces that function for the client on various levels, both technically and emotionally. It's a far less defined goal than calculating points for LEED certification or specifying a building's carbon footprint, but it is fundamental to producing timeless design that moves us in inexplicable ways.

Monday, March 05, 2007

The Price of Progress: Are We Still Willing to Pay it?

The other day in class at my very liberal seminary, I committed a cardinal sin: not only did I defend capitalism in the face of Christian theologians citing it as the world’s principal moral evil, I suggested that short-term hardships were often (get ready, here it comes!) “the price of progress.” It went down like this: a member of the class mentioned that NAFTA had “forced” a local plant to shut down and move its jobs to Mexico. I mentioned that this could be considered a good thing for Mexico, to which she responded, “But at the cost of losing indigenous jobs.” I said, “Well, that’s the price of progress.” The collective gasp in the room was surprisingly intense. The teacher struggled to regain control of the classroom, and practically had to save me from the tar-and-feather brigade. What a mistake I had made!

The incident reminded me of two facts: too many in our culture have lost the ability to grapple with complex processes, and too many have lost the willingness to sacrifice. This conversation is the perfect example: in the same breath, a woman whom I generally regard as intelligent bemoaned the ill effects of NAFTA (a loss of “indigenous jobs”) and the fact that her local church didn’t do enough “advocacy” on behalf of the poor. Do you see the contradiction? If you want to “advocate”, support NAFTA! It just gave a lot of good jobs to Mexicans. She wrote off NAFTA based on a soundbite (“We lost American jobs”), never even attempting to consider the net benefits of free trade. Second, even though she says she wants to see the lives of the poor improve, she apparently doesn’t think it should cost her or her community anything. Others will have to pay the cost for the lives of the poor to improve: classic NIMBYism (not in my backyard).

Something else has left these sorts of discussions: the ability to be coldly objective. I have written before about the reality that justice is a cold-hearted venture. Similarly, in the world of ideas, arguing a point without emotion has always appeared to me to be an asset. Apparently it’s not. I had to spend a good couple of minutes defending the fact that I, as a pastor, wouldn’t throw parishioners to the curb if they came to the church needing help because they had just lost their jobs. How I might personally help someone who is unemployed and what I believe objectively about free trade need not intersect; this apparent contradiction was too much for others to take. For me to imply that America and Mexico are both better off in the long run if America stops manufacturing was seen as cruel. While I was stating an objective fact, that the loss of jobs is historically the “price of progress” (ask IBM’s typewriter manufacturers), a roomful of my opponents made personal judgments about my character.

Of course, it’s not just economics. We see it in our foreign policy: we want peace in the Middle East at no cost to anyone (except Israel). We see it in liberal theology: we can’t risk saying no to any moral behavior lest it portray us as hypocrites. So what happens, then, when the majority of a country will not sacrifice? Two things come to mind: it must learn to abhor progress, which demands sacrifice. And it must also despise those who promote progress, since they are the harbingers of change. This may explain why leadership in general is a more precarious place to be than before: to lead people toward progress that involves sacrifice disrupts the inertia they have grown to love. So leaders who maintain the status quo (the Clintons come to mind) are heralded, while leaders who promote change (Bush) are hated. Until we can accept that there is a price to progress, I will ignore the cries of those who wonder why we never make any.

A final thought: one of the prevailing myths about capitalism is that for one person to get rich, another must become poor. This is not true. Free trade generally ensures that two parties can both be better off by letting the other do what it does best (comparative advantage, in the economist's jargon). While there is always a price of progress, the beauty of the market is that it minimizes that price and disperses it fairly.
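For the skeptical, here is the textbook arithmetic in a few lines of Python. The production figures are invented purely for illustration; the logic is Ricardo's:

```python
# Output per worker-day, with numbers invented for illustration.
us = {"machines": 10, "grain": 10}
mexico = {"machines": 2, "grain": 8}

# Opportunity cost of one bushel of grain, in machines forgone:
# the US gives up 1.0 machine per bushel; Mexico gives up only 0.25.
# Mexico therefore has the comparative advantage in grain, even though
# the US out-produces it in both goods.
print(us["machines"] / us["grain"], mexico["machines"] / mexico["grain"])

# Autarky: each country splits its worker-day evenly between the goods.
autarky_machines = us["machines"] / 2 + mexico["machines"] / 2  # 6.0
autarky_grain = us["grain"] / 2 + mexico["grain"] / 2           # 9.0

# Trade: Mexico specializes fully in grain; the US shifts toward machines.
trade_machines = 0.7 * us["machines"]                 # 7.0
trade_grain = 0.3 * us["grain"] + mexico["grain"]     # 11.0

print(autarky_machines, autarky_grain)  # 6.0 machines, 9.0 bushels
print(trade_machines, trade_grain)      # 7.0 machines, 11.0 bushels
```

Both totals rise, so the gains can be split to leave both countries better off; nobody got rich by making the other poor.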

Wednesday, February 28, 2007

Of Course the Oscars Were Boring: Leftists are Dull

Much has been made of the low-rated and generally dull Oscars which aired on Sunday night. As unused moments for humor came and went, production values stayed bland, and Ellen DeGeneres’ segues fell lifeless, it was clear that if this was the best Hollywood could put together, no wonder people have stopped going to movies. Upon a bit of reflection, however, I wondered why anyone would be surprised that the Oscars were dull. It’s hard for people who don’t truly love life and embrace its possibilities to have a good sense of humor. I mean, would you honestly want to have many of the people so prominently displayed on your television screen so much as over for dinner?

If Leonardo DiCaprio came, he might systematically demonstrate all the ways you are destroying the environment with your inefficient home. Gwyneth Paltrow and Maggie Gyllenhaal might speak of their concern for the downtrodden, forgetting to mention the privilege and luck they experienced growing up as children of the Hollywood elite. Or maybe Al Gore could stop in and share his Oscar-winning “film,” er, PowerPoint presentation about global warming. (That is, unless a snowstorm in this frigid winter keeps him at home.) If all else fails, maybe George Lucas could come over. If his conversation is as bad as his scripts, however, it might be a very dull night. (Okay, that wasn’t fair. I do like Star Wars.)

My point is that we shouldn’t be at all surprised the Oscars missed the mark; leftists are boring. Worse, they’re cynical. How could you expect a group of people who believe the earth will self-destruct in ten years to have a good time? When Al Gore is applauded as though he were a savior, you know you’re going to have problems. They are humorless, apparently no longer able to poke fun at themselves, and they take themselves as seriously as a heart attack. They live lives dominated by fear: fear of the future (why they love raising a stink about global warming), fear of losing control (why they hate capitalism), and fear of virtue (why their movies are cynical to the core). They can’t relax long enough to be witty and self-reflective.

Ultimately, unhappiness seems to have become synonymous with liberalism. I realize that is an over-simplification. But from my point of view, and from a lot of experience with die-hard leftists, modern liberalism has its roots in the negation of God, and therefore the negation of Truth. With those gone, what are you left with? So movies more and more revel in postmodern ennui, inevitable disaster, and pointlessness. So few movies can affirm a love and passion for life; it seems to take children’s films to do so, because most directors and writers don’t seem to believe there is much point in living.

Not that films have to be stupidly naïve. Far from it. Great movies of the past wrestled with the difficulties we face in life, and even the temporary feelings of hopelessness we all encounter, but they didn’t always give in to the seduction of cynicism. Chariots of Fire comes to mind. A cliché, perhaps, but Blade Runner, even set in such stark film noir scenery, is ultimately uplifting. Even Star Trek II is more profound than this year’s sci-fi Children of Men. And I love realistic movies. I was glad Scorsese and The Departed won. I was a huge fan. But realism and cynicism need not be the same thing, and Hollywood doesn’t know how to make the distinction quite yet.

I tried to watch as little of the show as I could, but I do love movies, and their production is rather fascinating. Knowing how difficult it is to get one photograph just right, to make a movie, much less a great one, must border on the impossible. Watching all of the nominees for costume design, art direction and cinematography reminded me how much skill and time go into a film; acting, consequently, is probably the least impressive feat. (Apparently, even American Idol contestants can become as good as any Juilliard-trained thespian overnight. Who knew?) But a lot was indeed missing on Oscar night, and it could all have been predicted if you know the heart of a leftist.

Thursday, February 22, 2007

Think Green, but Don't Think You Can Save the Planet

With every passing week, the call for an architectural response to global warming seems to be getting louder and louder. Demands for more sustainable design and technologies have been given high priority in our education and professional literature for the past couple of decades already. Even the architectural licensing exams, which test a candidate's knowledge of construction and of the legal liabilities affecting the profession, increasingly require extensive knowledge of green building technologies and strategies. Licensed architects are assumed to be able to take responsibility for health, safety, and welfare, and green design seems to fall under the fuzziest of those categories, welfare.

It's good to constantly educate oneself in new materials and methods, particularly if they lead to greater efficiencies. If so-called ecologically friendly products and techniques also enhance a building's beauty or lyrical power, that's even better. But if a design is considered great strictly because it implements green design, then we should re-evaluate how architecture is measured.

The preponderance of environmentalism in the architectural profession cannot be overstated. Not since Modernism has there been such an all-encompassing, systematic and orthodox body of assumptions influencing what we are allowed to think and what we are allowed to express. Modernism made forceful assumptions about the reality of the new industrial age and generated a systematic philosophy of design which organically spread to all the academies and professional groups in the span of two decades. Environmentalism has followed a similar path since the Seventies, and now it stands to provide a universal measure of the merits of any architectural design. This is a major development, since ever since the ascension of postmodernism in the late sixties there has been no such thing as good design measured against a universal standard. If standards are mostly arbitrary constructions, as a postmodernist would declare, it becomes pointless to ever seek an ideal of any kind. Green design brings back a universal standard, and its ideals have been clearly articulated by countless environmentally aware citizens throughout the world.

Contrary to what many environmentally sensitive designers might think, green design has not ushered in a revolution as immense as that brought forth by the early Modernists. Gropius, Le Corbusier, Mies and even Frank Lloyd Wright contributed to a radical reimagining of space, tectonic relationships, materials and the relationship between inside and outside. Green design has mostly consisted of technological innovations in a building's mechanical systems. Photovoltaics, automated lighting controls and grey-water collection systems are ways to maximize efficiency. Passive strategies such as daylighting, natural convection, and thermal mass paired with evaporative cooling are not new to green design, but have been used in buildings since the beginning of human history. The biggest contribution of green design has been to complement age-old, sensible, climate-influenced archetypes with new materials and technologies that minimize energy and water use while maintaining the level of comfort we have grown accustomed to.
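To give a sense of the scale of those mechanical innovations, here is a back-of-the-envelope estimate of a rooftop photovoltaic array's output. Every figure is an assumption chosen for illustration, not a specification:

```python
AREA_M2 = 20.0         # panel area (assumed)
EFFICIENCY = 0.15      # typical panel efficiency (assumed)
PEAK_SUN_HOURS = 5.0   # average daily peak-sun hours, sunny US site (assumed)

# Peak-sun hours are normalized to 1 kW per square meter of irradiance:
daily_kwh = AREA_M2 * EFFICIENCY * PEAK_SUN_HOURS   # 15 kWh/day
annual_kwh = daily_kwh * 365                        # ~5,500 kWh/year

print(f"{daily_kwh:.0f} kWh/day, {annual_kwh:,.0f} kWh/year")
# Roughly half a typical US household's consumption, before
# storage and conversion losses.
```

Useful, certainly, but note that this is efficiency arithmetic grafted onto a building, not a reimagining of space.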

So far I've hesitated to call green design 'sustainable'. By consuming fewer resources, and so leaving more for future generations, a building becomes 'sustainable'. However, a modern building can never be fully self-sustaining, as it will almost always rely on a distant power source and a municipal potable water system to maintain a high standard of living. Lots of resources and energy are used in manufacturing green products, whether by recycling or by transforming a renewable resource into a finished product. Over the lifespan of the building, energy and resources will be required to maintain it, even if at a lesser rate. True sustainability would more likely be found in an old European stone palace, since it consumed resources only in its construction and, lacking modern plumbing and electricity, required no further resources afterward. Such palaces don't necessarily need to be maintained, and some do quite well after a few hundred years of neglect. Masonry palaces are sustainable, but few would ever want to live in one without modern mechanical systems.

Since green design is more a matter of mechanical systems than of form, many of the buildings highlighted by green publications as outstanding examples have unexceptional appearances. What one often sees is an average house or office building, with the adjoining article listing the various green strategies and technologies in place, along with a few diagrams of how some of the systems are supposed to work. It's all in the details, and architects will study these case studies not as conceptual jumping-off points, but as alternatives to be considered when compiling specifications. The specification is a thick volume that contains all the materials and systems to be used on a project, sort of an architect's shopping and to-do list for the contractor to follow. Compiling specifications is not much fun, but architects can feel at peace with themselves knowing they chose the 'right' things. I predict that in the next decade, almost everything we specify will have environmentally friendly qualities, and there will be little thought in selecting them. But I don't count on this eventual occurrence to prevent the earth from warming or cooling.


It's not a matter of ignorance on my part that I'm skeptical. I, among countless others, have undergone tireless education about ecology, biodiversity, and environmentalist perspectives since as early as I can remember, including schooling and scouting, where I even earned the appropriate merit badges. I am even guilty of having enrolled in environmental science courses in college in order to fulfill my college’s requirement for a science course that wouldn’t be too hard. I am a LEED certified professional, meaning I am capable of leading the client, designers, and engineers in earning points for LEED certification as defined by the US Green Building Council. LEED (Leadership in Energy and Environmental Design) is the most accepted standard in the United States for determining what constitutes a green design. But all this knowledge about environmental issues does not convince me that it should take precedence over other competing needs. I don’t buy the “litany” described in detail by the author Bjorn Lomborg, which is the package of alarmist assumptions about the environment and its complementary ideology about how to resolve problems on earth.
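For readers unfamiliar with LEED, its mechanics really are a point tally, which a few lines of Python can sketch. The credit names and point values are hypothetical; the tier cutoffs follow my reading of the LEED-NC v2.2 scale, so treat the whole thing as illustrative:

```python
# Hypothetical credits a project team might pursue, with assumed points.
credits = {
    "Alternative transportation": 1,
    "Water-efficient landscaping": 2,
    "Optimized energy performance": 6,
    "Recycled-content materials": 2,
    "Daylighting of occupied spaces": 1,
}

# Cutoffs per my reading of LEED-NC v2.2 (69 points possible).
TIERS = [(52, "Platinum"), (39, "Gold"), (33, "Silver"), (26, "Certified")]

total = sum(credits.values())
rating = next((name for cutoff, name in TIERS if total >= cutoff),
              "Not certified")
print(f"{total} points -> {rating}")   # 12 points -> Not certified
# A real project tallies dozens of such line items to clear a tier.
```

Whatever one thinks of the individual criteria, notice what the standard measures: a checklist total, not beauty, utility, or the experience of the building.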

One of the reasons I can’t get too motivated about ‘saving the planet’ has to do with my respect for the values of free inquiry and tempered analysis. These values seem to exist less and less in my professional and academic circles, as it seems that every person or firm wants to outdo the other in achieving a higher level of purity in environmental consciousness. Just a few days ago the American Institute of Architects sponsored a “Global Emergency Teach-In”, and it has invited environmental-movement firebrands like Robert Kennedy Jr. and Al Gore as keynote speakers at its national conferences. The fact that buildings consume 40% of our country’s energy instills a great sense of guilt among architects, and so a LEED certified building becomes a form of repentance.

To me, there are more important things architecture should do than prevent the possibility that our world could become a couple of degrees warmer in 100 years. If we can’t predict what the weather will do over the next twenty-four hours, it is extremely arrogant of us to be certain what our climate will be like in a century. We should have the humility to appreciate the complexity and unpredictability of our planet, and to recognize that we are a tiny speck compared with the much larger forces that affect life on earth. To believe that the climate can be tamed if we humans simply exert our will is to pretend to possess an infinite power that we do not have. The call to stop warming presumes that there is an actual ideal climatic norm we should preserve, forgetting that humans have been adapting to inexplicable swings in the climate for millennia.

We should instead make our buildings more adaptable to however our climate will change. It is sensible to design a building to benefit from the micro-climatic features of its site. We should always strive to be good stewards of our land, lessening our impact on nature as much as possible and wasting as little as possible, while optimizing the comfort of the resident. Doing so will strengthen the interdependence of the building and its site, which often leads to an aesthetic harmony. The ancient Vitruvian ideal that architecture should embody harmony and beauty, utility, and stability is still most relevant, and it transcends the virtues of environmental bean-counting (e.g., how much one saves on the electric, water, and gas bills). We should embrace new technologies that deliver on promised performance, and judiciously examine their cost. The upfront costs of green technologies are often higher than those of conventional methods, which, like good Modernist design, makes them more easily afforded by the rich than the poor. Lifecycle cost analysis won’t convince a single mom barely scraping by to convert her small house to green design. I’ve visited countless mini-mansions exhibiting green design, as well as office buildings that belong exclusively to the company occupying them (rather than renting space). What they accomplish on the efficiency front is impressive, but I’m doubtful any of them will make much of a meaningful impact on the seven-inch rise in sea levels predicted by the U.N. Many of these examples of green architecture ignore the harmony/beauty aspect of the Vitruvian equation, and despite improvements still fail at providing adequate utility.
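Since I've invoked lifecycle cost analysis, here is a minimal sketch of what it actually computes, with every number (upfront premium, annual savings, discount rate) assumed for illustration:

```python
UPGRADE_COST = 6000.0    # extra upfront cost of the green option, $ (assumed)
ANNUAL_SAVINGS = 400.0   # yearly utility savings, $ (assumed)
DISCOUNT_RATE = 0.05     # time value of money (assumed)

def discounted_payback(cost, savings, rate, horizon=50):
    """Years until cumulative discounted savings cover the upfront cost."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        cumulative += savings / (1 + rate) ** year
        if cumulative >= cost:
            return year
    return None  # never pays back within the horizon

years = discounted_payback(UPGRADE_COST, ANNUAL_SAVINGS, DISCOUNT_RATE)
print(f"Discounted payback: {years} years" if years
      else "No payback within 50 years")   # 29 years at these figures
```

At these assumed figures the upgrade takes nearly three decades to repay itself once the time value of money is counted, which is exactly why such analysis persuades the owner of a mini-mansion more readily than the single mom scraping by.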

If one studies the evidence, we as a modern society have made great strides in increasing efficiency. We have also achieved cleaner air and water, and have enhanced our health. I’ve noticed far fewer stories about real pollution nowadays than I recall from growing up over twenty years ago. Even the supposed ‘crisis’ in landfill space has lost its prominence, since it really isn’t much of a problem. Having improved on all these seemingly intractable matters in the past decades has, ironically, led not to greater confidence in our ability to help our environment, but to a short-sighted panic about climatic phenomena we are far from understanding well. The hysteria surrounding global warming attests to our lack of perspective on what is really important: making life better for all humankind through prosperity and the technological advances that come with it. The best weapon against unpredictable forces and events is the strong infrastructure that wealth can provide. Wealth makes us, and what we build, more adaptable, which is our best and most sensible strategy for an ever-changing environment.

Tuesday, January 30, 2007

Growing Into Anonymity

As the resident non-architect blogging on a site about architecture, I thought I owed it to myself to read The Fountainhead, Ayn Rand’s description of the ideal man, the uncompromising architect Howard Roark. While I am not yet finished, I can say I’ve had a different reaction to this book than to Atlas Shrugged, mainly in that I see more clearly the deficiencies of Ayn Rand’s philosophy and writing. The book has also caused me to reflect on what it is to grow older and grow into a profession, all the while contrasting what I have become with what I hoped to become in my younger years. As I age, I understand better that while the world was once my oyster, I am now becoming just one of many. Instead of growing into fame, power, or notoriety, I, and most of my generation, am growing into anonymity. I wonder: is that a good thing or a bad thing?

One fact that stands in contrast to this growth into anonymity is the celebrity culture that seems to have no bounds. The notion of 15 minutes of fame seems a reality with blogs, YouTube, and reality television, all of which tempt us to think we may be the next great contributor to our profession, pop culture, or society at large. In fact, what has happened with the proliferation of media is that anyone’s odds of standing out for longer than a few minutes in anyone else’s consciousness have dropped considerably. While there are more opportunities to escape anonymity than ever, actually escaping it requires besting scores more competition. So it seems we’re stuck, anonymous and wondering when our time will come. To some, this anonymity can be a crushing weight of perceived meaninglessness, leaving them wondering what their purpose is. The disillusionment of anonymity is one of the reasons for the wild success of The Purpose Driven Life.

As it turns out, the Christian take on all this is that anonymity is not necessarily such a bad thing. In fact, it’s practically a virtue. Over and over Jesus instructs his followers that becoming the least is our path to exaltation, and that denying ourselves is preferred to worshiping ourselves. (Ayn Rand obviously has a different take, advocating man-worship in no uncertain terms.) Some secularists would claim that this is how religion controls people: by convincing them to become nothing so that someone else may control them. While this is undoubtedly true of many cults and dangerous sects of all religions, I don’t think it is at the heart of Christianity. From the Christian viewpoint, growing into anonymity simply puts us where we need to be: as creations of God worshiping our creator.

This is not an easy thing to accept in a culture that rewards those who stand out, even for Christians. Unlike the Soviet Union of Rand’s youth, America is home to constant temptation to shed anonymity, to “make a name for yourself,” to become something special. Even in niche fields where the opportunities for fame are minuscule, I wouldn’t be surprised to learn that within everyone there is a desire to stand out, even if only by having the most widely read blog in your field. But these wonderful motivators in youth can be difficult to lose in adulthood. How does one deal with the fact that he or she will not be “the great man,” or will not achieve fame?

It is here I would have to come back to faith. While the meek certainly inherit the earth, Christianity absolutely does not (as socialism does) lump all humans together as cogs in a wheel. In fact, Jesus’ healing ministry to those in the most need (Mark 5 about the demoniac is one of my favorite stories) suggests that God cares very deeply about humanity on a personal and individual basis, assuming you believe that Jesus was divine. God’s words to Jeremiah are often used in the pro-life debate, but I use them here to suggest the way we may remember we are more than one of many; we are one full person in the eyes of God, and it is from this place we may find our full identity: “Before I formed you in the womb I knew you, and before you were born I consecrated you.”

My understanding of the mid-life crisis is that middle age allows man to reflect on his place in the world, wanting to know if he has made a mark. Ayn Rand can project her ideal man easily enough in a work of fiction; for the rest of us, we have to be content knowing that yes, our anonymity is a difficult thing to accept. But if we accept that our anonymity on earth is actually not a reflection of the way God knows us, perhaps it is a little easier to take. This is not a panacea; it is a reality of the spiritual life.

Saturday, January 27, 2007

The Truth About 15-Minute Meals: They Stink

I realize this may seem a bit of a departure from A&M’s usual fare, pardon the pun. We usually prefer to dabble in architecture (obviously), politics, religion and culture at large. Food, however, is something we all have in common, and the “Food Movement” in the culture, be it the “quick meal” phenomenon or the rise of restaurants everywhere, bears brief comment. From the popularity of food television programming to the plethora of quick-meal advice and recipes, there is no shortage of help for the good and the bad in the kitchen, or those new to it. But something has been lost in this movement to appeal to busy soccer moms and domestic dads: what makes good food good.

Cookbooks are, of course, nothing new. Even my grandparents, cooks of consistently excellent food for years, owned several and referenced them often. Any great cook steals from the great cookbooks of years gone by. The trend now, however, is toward quick meals, be they 15- or 30-minute meals. (Someone recently gave me an old Julia Child cookbook, and some of the recipes are five pages long. That book probably wouldn’t even get published today: too long, too complicated, and not user-friendly enough.) Besides the fact that these minute counts are not entirely accurate once you consider everything that goes into preparing a meal besides cook time, can really good food be cooked in such a short amount of time by the lay chef? What most of these meals seem to do is take your basic meat, starch, and vegetable combination, add a few spices, and call it a meal.

Certainly, most of the time, that’s a fine meal. But what makes good food good is time. Gone are the days when busy Americans had hours to watch a stew, or time to spend with 10 or more ingredients. The slow-cooked food we used to prize, food that allowed ingredients to marry, has given way to whatever can be grilled, blanched or microwaved in 10 minutes or less and still be edible. There is also the reality of dual-income families, so there is admittedly less household time to cook. When time in the house got short, an emphasis on good food was one of the first casualties. Hence the rise of two industries: the restaurant business, and the “quick meal” section of the cookbook aisle.

Perhaps I’m nostalgic, but to me, we really lose something when we don’t bother spending time with meals. First, we lose the appreciation of different flavors converging, as they might in a stew, a gumbo (I am from Louisiana, after all), or even comfort food like chili or enchiladas. Only when food is allowed to take time do its natural wonders seep out. Second, there is a lot of conversation lost. It seems that if meals have to be made quickly, then they have to be eaten quickly as well. Thus the common family time, the meal, is limited. Food that takes time allows for conversation, most of it probably frivolous, but important nonetheless. Children hovering around a pot waiting for the soup to finish, or (heaven forbid) helping to wash fresh lettuce: these are unique ways families can relate to one another. And with food we are willing to take our time with, we get the benefit of using real ingredients. For example, if you look at the two main ingredients in a store-bought Italian salad dressing, you will see that they are not olive oil and vinegar, as you might have guessed. They are water and corn syrup. Is that what we really want to eat?

This is not, by the way, a critique of “fast food,” the McDonald’s of the world, or even quick meals. Because of the pace of life, we all need 15-minute meals from time to time. But I am an apologist for good food, no matter the cost or the time. We lose something when we refuse to spend time with food. Aesthetically, we lose a reminder of natural beauty when our senses aren’t involved with food. Culturally, we lose the communal aspect of both cooking and eating together. I guess you could say I’m not someone who eats to live. I live to eat. Yum.

Saturday, January 20, 2007

Traffic Jams: A Sign of a Prosperous and Free Society?

Thinking about corbusier’s recent post, I have been considering again, and trying not to take for granted, what it means to live in a free society. I certainly agree that America’s prestige and affluence have made it that much harder for us to understand what it is to live under anything other than our historically ideal conditions. Indeed, one could very well make the case that we are so unable to imagine what our lives would be like under a tyrannical government that we have become completely spoiled. So spoiled, in fact, that as a nation we may end up rotting from the inside out well before an outside force overtakes us. (Or perhaps the two will work together.) But whatever America’s moral flaws may be, I’ll save them for another post. I am more interested in trying to respect (literally, to see again) what a society that values life and the rule of law looks like, in opposition to a totalitarian society that we find hard to imagine.

So naturally, as I was listening to Chicago’s traffic reports yesterday morning, it struck me that traffic jams may be a wonderful barometer of a nation’s health. For example, that morning the outbound Kennedy was jam-packed because a traffic accident, and its ensuing police investigation, had blocked traffic to all but one lane. No doubt all the rubber-necking slowed it down even more. I instantly had the same reaction commuters have to such news: because of one bad driver, thousands of others will needlessly be stuck in rush-hour traffic for a very long time. Ah, the injustice of it all! But further reflection got me thinking that maybe this was, in fact, a perfect representation of justice, of respect for human life, of putting into action a rule of law that protects all people equally.

What does it say about a society that it is willing to displace thousands of commuters for the health, whether physical or legal, of one? From that point of view, traffic jams, though clearly inconvenient, say a lot. And most of it is good. Not only is it often crucial for medical personnel to get to a scene quickly for the physical well-being of the people involved, but for the legal rights of those involved in the accident, it is critical that an accurate police report be written. While it is true many frivolous lawsuits come from auto accidents, the legal claims any person can make following an accident are only secured if a verifiable entity (such as the police) accurately documents the event, and an impartial court then presides over the case. In other words, it is more important that the potential health and legal rights of one person be preserved than the morning commute of thousands of others.

While we not only take traffic jams for granted but in fact complain a great deal about them, they are often (but not always) emblematic of what a free and moral society looks like. Can we say this of every nation? Surely not. It is hard to imagine that in every nation citizens have the confidence that their legal rights will be protected if they are of low social standing, for example. And not every government protects the value of life to the point of being willing to cause such congestion on the highways. Unlike in so many nations where it is seemingly impossible for people of different ethnic, religious or racial backgrounds to protect one another, in America any person who needs health care, whether on the highway or in the Emergency Room, gets it. The reason the story of the Good Samaritan is so popular is that Jesus knew how rare it was for a Samaritan to offer aid to an Israelite, and vice versa. The Samaritan, in effect, would have caused a traffic jam if he had been first in line, not third. (The priest and the Levite’s morning commute would have been greatly hampered!)

Of course, there are many reasons for traffic jams, most of them good from a certain point of view: more people are driving, meaning more people can afford cars. Roads are being repaired or improved, in and of itself a pretty good thing, even if in Chicago large amounts of graft are usually involved in such construction. Or there has been an accident, and the police are diligently investigating the scene to protect the health and legal rights of all involved. Come to think of it, there may not be a better representation of the greatness of America. Instead of bemoaning traffic jams, perhaps we should build statues celebrating them, or immortalize them in art showing an ambulance crew helping the victim of an accident while thousands of morning drivers are delayed.

A free society is not always convenient. More importantly, it protects the rights of one person over the conveniences of hundreds of others. This is the sacrifice not-yet-free nations will have to learn to make, and somehow, live with.

Saturday, January 06, 2007

A Failure of Imagination: Underestimating the Influence of Saddam's Totalitarianism

The recent deaths of Saddam Hussein, Jeane Kirkpatrick, Augusto Pinochet, and the poisoned Russian spy, along with the current debate on the extent of American intervention in Iraq, have brought certain truths to light. The first is that political totalitarianism is awful on every account. This realization may seem obvious, but in monitoring current media chatter and listening to the reactions of everyday people to newsworthy events elsewhere in the world, it is apparent that there has been little effort to articulate in any detail the effect totalitarianism has on a people, and how it leads to such widespread antisocial pathologies in countries that have succumbed to totalitarian rule.

In the comfortably prosperous and civil environs that are middle-class life in America, such extremely depraved acts as assassination by radioactive poisoning, state-enforced execution with a taunting chorus recorded for posterity by camera phone, and the tactics of militant forces that thrive on radically undermining any separation between armed forces and civilians appear incomprehensible to most of us. From the perspective of a society that takes for granted the prevailing rule of law and basks in the limitless freedom to express oneself and make as much money as we endeavour, there is little difference between a dictator responsible for killing a few thousand of his countrymen for the sake of permanent political stability, rule of law and prosperity for his country, and a totalitarian ruler who exterminated several hundred thousand of his own subjects for the sake of sowing permanent tribal division, economic degradation and political anarchy. Jeane Kirkpatrick pointed to this difference, and urged that American foreign policy acknowledge it. But owing to the extraordinary circumstance of growing up in a country that hasn’t experienced any degree of authoritarian rule in more than 230 years, many Americans are naive to the fact that most of the world’s populations are subject to regimes far more invasive and deterministic in an individual’s life than we can imagine.

If one looks at areas of the world devastated by civil war, terrorism, or subjugation by mafia clans or medieval tribalism, one can explain much of it through failed totalitarian experiments. In Russia, seven decades of totalitarianism begot a stunted and increasingly one-party "democracy" that quashes opposition by whatever creative means necessary, and in which the mafia is a major factor in daily life. In Cambodia, the Khmer Rouge achieved a sort of national lobotomy that has left the country incapable of self-government and deficient of the educated class it would need to follow Vietnam's path to economic self-sufficiency. In numerous African countries, short-lived attempts at establishing socialist utopias have often led to a radicalization of tribal animosities, to the extent that genocide and hacking off people's limbs are par for the course. In Afghanistan, the Islamic totalitarianism of the Taliban succeeded only in incarcerating women completely, and in reminding Afghans why it is more pragmatic to bow to those who threaten unrestrained violence than to the democratic yet defenseless government of Karzai. And in the aftermath of Castro's eventual passing, I don't expect Cuba to suddenly transform itself into a capitalistic democracy any time soon.

The largest totalitarian experiment, the Soviet Union, has much to teach us about the system's adverse social effects. Although Russia under the Czar denied any modicum of personal liberty to its vast peasant class at the hands of the aristocrats, Stalin made sure freedom was practiced by him alone. He made sure that everyone agreed to the Soviet state's omnipotence, and ruthlessly rid himself of those who did not. What many people outside the USSR and other similar totalitarian systems fail to grasp is the extent to which such a system effects permanent changes, damaging the psyche of subjected citizens in ways that take several generations to eliminate. Paranoia becomes a tool of self-preservation, as the social trust we take for granted in the U.S. is obliterated by a constant policy of purging suspected political enemies. People learn to be two-faced, carefully expressing one thing in public and voicing something contrary in private, which results in individuals being skeptical of one another's true intentions. Such uncertainty erases all sense of trust, an essential ingredient in the building of any enterprise comprising strangers who must cooperate. Nepotism becomes the rule in who gets to do what and where, and competition between families rules out neutral arbiters. Financing a business becomes a family affair, and families who do not have the means to pay for anything become beholden to underground mafias, which serve as surrogates for states that serve only the elites controlling them. Where public trust is gone, one is no longer evaluated as an individual with an independent conscience but identified as a member of a clan, devoid of any real individuality. These observations are far from new, and have been better explained in Francis Fukuyama's book on how the value of trust varies from one society to the next.

What political totalitarianism does, then, is eliminate individuality in all its forms, particularly in one's thoughts. Once the totalitarian regime expires, it is not a given that such individual self-consciousness automatically returns. When an overarching identity defined by membership in a state is repealed, what replaces it are the layers of identity tied to simpler social structures: ethnicity, kin and family. To peel those layers of identity back to the individual is impossible in a social environment where people do not treat other individuals with dignity, independent thought and sincerity. A free society run by trusting, civil individuals is a huge leap in the history of social evolution. Totalitarianism does not derive from people deciding to withdraw their individuality to become part of a larger political identity. Instead, totalitarianism draws on pre-existing loyalties to ethnicity and kin to forge a far-ranging loyalty to the state. When one hears the common argument of how a people can maintain a free society when they have never known freedom, it is another way of stating that individuals in that society haven't learned to trust each other. Contrary to the Marxist proposition that the final end of social evolution is the formation of a classless society based on economic equality (which in my opinion is really about condensing several classes into one, which then becomes a 'state'), I believe that totalitarian-enforced socialism is nothing more than a continuation of a long tradition of abolishing individualism in favor of groups, from the family, to the clan, to the political party. Thus, when totalitarianism fades, the family, the clan and the political party once again take precedence. In this context, individualism never came into being in the first place.

From this perspective, much of what is happening in our world becomes comprehensible. In Iraq, Saddam over many decades was able to completely destroy any faint democratic notion remaining among his people; he used his ethnicity and his Tikriti-based kin to subject all other ethnic groups and clans to his and his party's rule. Individual identity was secondary to the ethnicity and clan one belonged to, and to whether one was a Baathist or not; Saddam judged people not as dignified individuals, but by what threat they posed to him personally or how they could enhance his power. For this to happen, hundreds of thousands of people who did not belong to his group were slaughtered for nothing other than who they were. Kurds, Shia, Marsh Arabs, and even kin who consorted with people outside his family were all people who betrayed Saddam's narrowly defined totalitarian identity. With Saddam gone, to where did the power devolve? To the next level of social structures: the clans, the religious leaders and the religiously affiliated political parties. Demographic isolation is currently taking place in Iraq, with each of the three major ethnic groups claiming its own territory, and with multi-ethnic cities changing into singular bastions of one ethnicity or becoming the home of a powerful clan. Such movements of people have made American objectives for a stable federal democracy in Iraq a tall order. Although such a goal is admirable, and any country with an existing political culture amenable to democracy would jump at the chance America has given Iraq, the totalitarianism instituted by Saddam and his Baath party has diminished the ability of the Iraqi people to understand and embrace the golden opportunity before them. For many Iraqis, the post-Saddam era seems rather to have presented an opportunity to exact revenge on the former ruling group.

When I read about the current difficulties facing American-led coalition forces in Iraq and the horrors terrorism inflicts on Iraqi civilians, I tend to see these events from a different perspective than the one driving the contemporary debate on Iraq. It is easier, to my mind, to indict American military strategy, Iraqi political fractiousness, or the intelligence of George W. Bush than to confront the totalitarian reality that was Iraq under Saddam. This reality is not only beyond our comprehension; it is beyond our imagination. Totalitarianism brings out the worst in human nature and uses it as an instrument to wield control. More gruesome and more lethal forms of torture were developed under totalitarian regimes, and systematic executions and show trials were means for the authority to communicate power and induce fear in the population. Islamic fascism takes the totalitarian ethic further by injecting fanatical religious conviction into the mix, thus inflicting cruelty and mercilessness not only on members within a political polity, but outside it as well. In addition to executing on the spot any villager who does not demonstrate loyalty to the controlling Islamic fascists, they will capture, torture and summarily execute foreign soldiers and humanitarian workers with little regard for international law or any modicum of human dignity. Whenever one of those beheading videos is broadcast, or reports are written of terrorist militants using schools and mosques to stockpile weapons and coordinate plots, or footage blatantly shows terrorists using civilians as human shields, I can’t help but think about the many years of totalitarian brutality, in its various forms, that have led to such a sordid outcome.

But what I also cannot comprehend is the reaction of so many people to such blatant evil. Such resignation, such complacency, such futility in trying to explain it away as a just reaction to what the Americans did beforehand. That is not how one should respond to this subhuman depravity. One should instead support efforts to eliminate it outright by force of arms. No alternative, such as talking it out or making concessions, will stop totalitarian violence. Such misguided efforts at accommodation help explain why many totalitarian regimes last long after their political subjects have tired of them.

One of the conclusions reached in investigating the 9/11 attacks was that a failure of imagination kept us from anticipating such an unconventional terrorist plot. Much of what made 9/11 so traumatic to the American psyche was its surreal quality, a sense that these attacks could not have taken place in reality, but only somewhere in our imaginations, as if in a dream. I fear that much of the world has likewise failed to imagine what the Iraqi people were subjected to under Saddam, a failure that has not only hampered American efforts to stabilize Iraq but has also left most of us with no reasonable answer to a powerful evil staring right at us. If we cannot take a firm stand against totalitarianism and its evils in front of our faces, whether on the battlefield or on our television screens, how can we expect to protect our own freedom from it?

Thursday, January 04, 2007

Nanotechnology and Bread: How Technology May Change Religion

Listening to a podcast the other day spurred me to ask what the future holds for religion in a quickly-changing world. Futurist Raymond Kurzweil, who has a history of accurate predictions regarding artificial intelligence, sees robots becoming as intelligent as humans by the year 2029. Kurzweil, an incredibly prolific inventor and writer, holds that because technological change is exponential rather than linear, the world will see drastic technological breakthroughs in the coming decades that will increase lifespans, spawn robots capable of feeling, and set nanotechnology swimming in our bloodstreams. All of this will eventually lead to a singularity, the point at which humans can no longer keep up with the advancement of technology. Imagine the replicants in Blade Runner, and you get a pretty good idea.
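
To see why that distinction matters, here is a minimal numeric sketch (in Python, purely illustrative; the two-year doubling period is my own assumption for the sake of the example, not one of Kurzweil's published figures):

    # Illustrative only: contrast linear growth with exponential growth
    # in "capability", assuming a hypothetical doubling every two years.
    for year in range(0, 31, 10):
        linear = 1 + year              # fixed gain of one unit per year
        exponential = 2 ** (year / 2)  # doubles every 2 years (assumed)
        print(f"year {year:2d}: linear {linear:3d}, exponential {exponential:7.0f}")

After thirty years the linear curve has gained thirty units, while the exponential one has multiplied more than thirty-thousand-fold, which is why predictions like Kurzweil's sound so drastic to our linear intuitions.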

Which naturally made me wonder what the role of religion will be in such a new world. The vision in Blade Runner or Minority Report seems to be largely if not completely secular. People of faith, such as the "mystics" Ayn Rand portrays in Atlas Shrugged, seem to be apocalyptic lunatics, caricatures who have only ever read the book of Revelation. But I don't think that will be the case in full. With every change come reactions, and certainly the popularity of the dreadful Left Behind series is proof that changes in the world (even the mere turn of a millennium) spark absurd reactions of fear. But ultimately, the apocalyptic fear-monger is only the outward caricature of the way technology may drive people to real faith. Technology, like a good photograph, is always full of promise and potential, but rarely cashes in. Technology, for many, brings with it a great amount of hope that our lives will be improved and diseases cured. Some dream of a techno-utopia.

But technology will be just as flawed as any other human effort. With every benefit comes a trade-off. You get to live longer? Great, but don't expect to be able to retire before you're 80 to pay for it. No more cancer? A plague is probably just around the corner; they always are, and they're always ahead of our medicine. More information at your fingertips? It's at everyone else's too, so the world just got infinitely more competitive. When technology fails, we will either continue to grasp at new hopes, or revert to that which is real hope. That is not to say technology is a bad thing, or that I fear it. It's coming whether I like it or not. But its possibilities are limited, not infinite, and it definitely won't allow us to live forever.

What has the impact of technology on religion already been? I can't help but think there is some link between the rise in fundamentalism over the last 20 years and the rise of technology. For example, the Internet has allowed jihadists to propagate their message, but it may also have struck a great sense of fear in them. It was one thing when the West was relegated to nations thousands of miles away. But the Internet has erased borders and boundaries. It has allowed the West to "infest" every corner of the world with its ideologies, commerce, and (gasp!) religion. It has gotten harder and harder to remain isolated, so it makes perfect sense that some who wish to remain so are fighting back.

We should remind ourselves that while America creates and devours knowledge at unprecedented rates, there is among many a real fear of the changes information may bring. We should have known that the information revolution would spark just that: a revolution. Surely we should have been able to guess that not everyone is in favor of knowledge. Some groups want to stay in the dark, and they desperately want others to stay there with them, at any cost.

So will it only get worse? Certainly the current wars will have to play themselves out. If democracy is possible in the Middle East, the people there will eventually adapt to technology, though it will take a very long time. I am optimistic that they will. But at a slower and steadier rate, I can't help but think that massive changes in the world around us will always draw people to faith. I'm not crazy about that evangelism strategy, mind you. But when I set the imagery of robots curing disease in our bodies alongside, say, communion bread and wine, the older medicine seems like it may be a bit more appealing.

There is an aesthetic reason for this as well. The world of technological advancement seems a sterile one, one where human interaction is limited. The blogosphere is even heralded by some as a replacement for local communities. I don't see it. Technology enhances our lives; it doesn't change the needs of people. We will always crave real relationships and real truth, and religion offers these in ways technology can't. I'll certainly be interested to see the way technology, not just the Internet and vaccines but seriously new technologies, changes the role of religion in society.

Friday, December 29, 2006

Christmas and Birthdays: Welfare for the Rest of Us


I remember it very well. On Christmas morning when I was about 11, I opened the front door to my mother's house, looked at the assembled gifts underneath a modest tree, and asked, "That's all?" Little did I know that an abundance of toys had been replaced with a refinement of them. My older brothers made it clear that wasn't the right response, and I look back on it now with embarrassment. With that statement came the realization, in hindsight, that the dependent culture is no longer relegated to professional moochers. With every Christmas, birthday, anniversary, Valentine's Day, etc., it seems the impetus to be charitable has gone from being a "nice thought" to being a mandate. Is there any one of us who does not expect gifts to mark these occasions? Or more to the point, do not all of us feel compelled to buy more elaborate gifts every year? How have occasions for celebration turned into such times of material expectation, and is there any way to curtail such expectation?

The real problem with expecting a bounty at every Christmas or birthday is that the joy of receiving such gifts inevitably dulls with time. I especially notice it in children who grow tired of new toys within a span of days, as their appetites for consumption grow more and more insatiable. The build-up for the next round of gifts decreases, as the getting of gifts is assumed. There isn't much wonder in what gifts will be given. With gift cards, there is even less creativity required of the giver. So holidays and birthdays become more and more akin to waiting in the welfare line: even though we haven't done much to earn these gifts, we expect them with all the self-righteous vigilance of a hard laborer awaiting his day's pay.

Memories of my early childhood (before I had become so spoiled) certainly include toys. But with mismatched G.I. Joes, Transformers, and even homemade Dukes of Hazzard cars, I could create situations and entire war zones that would entertain me for hours. I was still bored at times, so playing a sport outside was another good alternative. Boredom, as it turns out, is a crucial impetus for creation. We never had cable or a gaming system (I had to wait until my twenties for a gaming system, and I still don't have cable), so I was forced to use my imagination. Like most boys, I appreciated war scenes the most. And when I was bored with that, I learned an instrument, went to a friend's house, or read.

Creators and inventors are people too bored with the way things are to leave them be, and I would guess that they were compelled early on not to ice their creative spirit. With a deluge of material possessions, however, it has been relatively easy to create a generation of distracted children who require more and more distraction. It is true that a deluge of toys can still encourage creativity; for example, new Lego robots require builders to amass specialized parts and write code to operate them. But to really make something from scratch, or to really imagine something beyond what is in front of you, requires patience and probably boredom. On the plus side, perhaps it will make the next generation of inventors that much more clever and ahead of the curve.

But isn't losing the need to create is the great tragedy of the welfare mentality? It discourages innovation and creativity because it simply doesn't require it. Why bother creating when you're paid to do nothing? I wish conservative politicians would keep speaking of the "soft bigotry of low expectations", because it very accurately describes the welfare state. But the same is true in the family. If children are constantly expected to create no real entertainment for themselves, that's another way of saying you don't believe they can. Worse, though, is the establishment of an attitude geared toward expectation, rather than appreciation. While many Christians complain that Christmas has become too secular, they are often equally guilty of basting their children in toys of distraction. With the marginalization of Christmas, I hope Christians will take the time to explain why we give gifts to begin with, as a response to the gift which was first given us.

Children aren't the only ones with this attitude of expectation; it's often no better with us as adults, unless we consciously train ourselves to resist such temptations. And our toys aren't cheap. A natural maturation will hopefully teach us to eschew material gain as our concerns grow more selfless. But that spoiled child within us never leaves, and it is harder to quench an appetite than never to develop one. So why do we so restlessly convince our children that every minor celebration, or even major one, requires gifts? From a theological point of view, gifts should be Gospel, not Law. They are something unearned, and hopefully appreciated. With every holiday season, though, I feel more and more as though they're Law, a requirement devoid of much joy.

Friday, December 22, 2006

Whither the Cardboard Model?

For those fortunate enough to have attended a college with an architecture school, the architecture buildings instantly captivate visitors with the small architectural models built by students. These little cardboard or basswood miniatures, which manifest spatial ideas in physical reality, are often the most accessible way for the layman to understand a design concept. At the same time, the ubiquitous display of models throughout an architecture school gives laymen the impression that architecture lacks a level of seriousness, since spending time doing arts and crafts is commonly understood as part of child's play. It gives the impression that architecture students must be having fun, while in reality there is nothing more tiring and frustrating than gluing small pieces together with such concentration and precision.

Along with the challenges posed by building the model, there is much expense involved as well. It is easy to squander hundreds of dollars on a reasonably detailed model for a school project. For some students, modelmaking is a serious craft, the goal being not to solve three-dimensional problems toward improving a project's design, but rather to execute a model with great precision, detail and realism. Lacking fine motor skills and patience, and being naturally stingy, I made models as seldom as possible, and made them intentionally rough, as tools for spatial exploration and problem-solving. I intended them to be abstract, so that concepts could be more powerfully presented, while also thinking that too much detail wound up giving a model a dollhouse-like character.

The major reason I tried to avoid building too many physical models was the amount of time they consumed. There's no way of rushing a model. One can cut a straight line with an X-Acto knife only so fast, and glue can harden only so quickly. And still there was no way of testing three alternatives without building three separate models. The finished results might intrigue onlookers and help those who want to discuss the project with the student. But they were a chore to complete, and I knew that model-building skills were among the least important in the expansive practice of architecture.

Therefore, as soon as I enrolled in studio classes that did not require physical models for presentations, I switched to using computers to create virtual three-dimensional models. In addition to ensuring high-quality line drawings of unparalleled accuracy, computer-aided design (CAD) could build models very quickly and with great precision and verisimilitude. The best part was that it spared me trips to the nearby crafts store, saving me substantial money. It also helped that computer skills had become essential to getting any entry-level job in architecture. As an additional benefit, becoming proficient in computer-aided design served as a gateway to understanding computer animation, graphic design and artistic rendering.

With all these advantages, there was yet a significant drawback to computer models: a stranger could not pick up a digital model and view it from any angle in the physical realm. The closest means of doing this on the computer was for that stranger to muddle through unfamiliar software and navigate in virtual space. Since the model could not come to them, the next best thing was to print on paper as many points of view as possible. A digital presentation often consisted of dozens of different views to help convey the spatial idea, which often made such presentations disorienting to those unfamiliar with the project. The physical cardboard model, by contrast, was a lot more straightforward, in that it could be viewed from any angle simply by moving one's eye.

Since finishing architecture school, I've rarely had to dedicate any effort to building physical models. Clients will almost never pay for study models, setting aside money only for presentation models useful in selling the project to potential tenants and investors. Since the latter type requires tremendous amounts of time and high levels of workmanship, they are usually outsourced to independent model-making workshops (which are increasingly found in places like China and India). Architects' fees are inherently tight, thus rewarding an efficient use of time and manpower that is antithetical to model making. Computer models, meanwhile, suffer from an interface inaccessible to people unfamiliar with the software, leaving older architects unable to fully appreciate the merits of a design. Until this problem is resolved, whether by rapid prototyping or by creating a virtual-reality environment that anyone can physically participate in, non-reimbursed cardboard study models will be used from time to time.

As I've taken on more design responsibilities in my professional projects, I've maintained my aversion towards physical models. Luckily, computer modeling software has in the last few years made significant leaps towards intuitiveness and user-friendliness. Programs are easier to learn, have more efficient rendering engines, and yield quick results. I particularly enjoy using SketchUp to investigate massing, shadows, perspectives, and quick fly-through animations. Although it tries to bridge the art of sketching with modeling, SketchUp is deceptively simple, providing few tools to modify objects, and yet it accomplishes better results than more expensive and complicated software. Together with trace paper, the schematic design process goes quickly, and the program's compatibility with standard drafting software allows little time to be wasted in producing two-dimensional plans and elevations.
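
SketchUp computes shadows interactively, but the geometry behind a basic shadow study is plain trigonometry. Here is a minimal sketch (in Python, purely illustrative; the building height and sun altitude are made-up inputs, and a real study would also account for solar azimuth and surrounding context):

    import math

    def shadow_length(height_m: float, sun_altitude_deg: float) -> float:
        """Length of the shadow cast on flat ground by an object of the
        given height, with the sun at the given altitude above the horizon."""
        return height_m / math.tan(math.radians(sun_altitude_deg))

    # A hypothetical 30 m building under a low 25-degree winter sun:
    print(round(shadow_length(30.0, 25.0), 1))  # roughly 64.3 m

The appeal of tools like SketchUp is that such calculations, multiplied over every face of a massing model and every hour of the day, happen instantly and visually rather than by hand.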

With all these new innovations, what will become of the cardboard model? My guess is that it will still be of use so long as the computer model exists only virtually, on the hard drive of a computer. Clients can understand a model far better than the almost unintuitive forms of architectural plans. Still, a computer-generated rendering is more seductive, can be more easily edited, and obviously ships better than a physical model. I predict that physical models will eventually cease to be the product of X-Acto knives and glue. In their place will arise models built by machines following computer-generated architectural plans. Such technology is in its infancy now, but will soon permit designers to test alternatives more productively. The architect becomes more empowered when he or she does not have to worry about being a craftsman of miniatures.