Lies, Damned Lies

The elevators in our office building have these little monitors built into them, on which are displayed random tidbits of pseudo-news and other glossy distractions. Because god forbid we should be bored on the ride to the third floor.

Anyhow, the other day as I was leaving the office, the monitor was displaying a little infographic that showed a steep decline in the number of hours per week Americans are working — from 40-point-something in 2009 to 34-point-something in 2011. The other person on the elevator and I looked at one another and both said “that can’t be right,” but there was no context other than “the number of hours per week Americans are working” and a series of numbers associated with years. We wondered at the time whether “Americans” meant all adult Americans or adult Americans with jobs, such that the steep decline indicated more people out of work. As I wandered off toward the subway, it hit me that even if the figures referred to adult Americans with jobs, the steep decline could indicate the growing part-time-ification of the workforce — in which case the drop suggests a growing under-employment problem, and not that Americans are opting to spend more time in the Hamptons.
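(To make that composition point concrete, here is a minimal sketch with entirely made-up numbers, none of them drawn from the infographic or from any actual labor statistics: if the share of part-time workers grows, the average number of hours worked falls even though no individual worker’s schedule has changed at all.)

```python
# Illustrative only: hypothetical numbers showing how a shift toward part-time
# work drags down "average hours worked" without anyone choosing to work less.

def average_hours(groups):
    """groups: list of (number_of_workers, hours_per_week) pairs."""
    total_workers = sum(n for n, _ in groups)
    total_hours = sum(n * h for n, h in groups)
    return total_hours / total_workers

# Earlier scenario: 90 full-timers at 42 hours, 10 part-timers at 20 hours.
before = [(90, 42), (10, 20)]
# Later scenario: the same individual schedules, but 30 of the 100 workers
# are now part-time.
after = [(70, 42), (30, 20)]

print(round(average_hours(before), 1))  # 39.8
print(round(average_hours(after), 1))   # 35.4
```

Either that story or the more-people-out-of-work story produces the same downward-sloping line; the number by itself cannot tell you which one you are looking at.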

This stupid infographic has annoyed me since I saw it, not least because it demonstrates everything that’s wrong with the ways that statistics get used by the mainstream media: Look! A number! It must Mean Something! What gets missed, of course, is that the gap between numbers and meaning can only be bridged by interpretation, and that such interpretation requires serious critical and analytical skills. That while numbers may have a demonstrable basis in empirical reality, what they mean is not at all evident, and many interpretations of them may simply be wrong.

All of this has me thinking about some of the claims that have been leveled at the digital humanities in recent days, in particular that it’s a mode of “processing” texts that attempts to bring literary studies fully into the empirical, by counting rather than reading. And sure, there is some work in the field whose results look awfully numeric. But by and large, while the wide range of work covered by the umbrella term “digital humanities” has one foot in the digital — in the kinds of tools and methods many have associated with scientific or engineering or other empirically-oriented fields — the best of it has the other squarely in the humanities, and in the kinds of questions and concerns and modes of analysis and interpretation that arise out of those fields. And it seems to me today that one key role for a “worldly” digital humanities may well be helping to break contemporary US culture of its unthinking association of numbers with verifiable reality, by demonstrating the ways that such numbers only open the process of meaning-making.

The Public Scholar’s Two Bodies

I started this blog as an assistant professor, under conditions that were never fully pseudonymous but were perhaps semi-veiled, at least by the fact that very few people knew me, and even fewer of those who did knew anything about blogs. All of my colleagues, that is to say, were looking in the other direction, and so I was able to say more or less what I wanted. Only gradually did this odd collection of writings and reflections come to be associated with a professionally known me.

Even after that, it seemed perfectly reasonable for the persona I inhabited on the blog to be a bit personal, to think through problems I was actually facing, to be at times a bit worried and not entirely secure. I was, after all, an assistant professor, in an online community composed primarily of other assistant professors, and thinking in public through the anxieties associated with that role was part of the point — we were using the blog format to demonstrate to one another that however isolated we may have felt, we were not alone.

Nearly ten years have gone by, however, and I’ve not only been tenured and promoted (twice!) but I’ve moved into a new position, one that calls on me to take on a new kind of leadership role. And those changes now have me reassessing the kinds of writing that I can — that I should — be doing in a space like this one. A post like yesterday’s, exploring some concern that I’ve got about my relationship to my work, can leave me feeling overexposed today in ways that it never would have eight years ago. Or even five years ago: even after I was tenured I felt that it was important to model a way of being a scholar that didn’t hide the messy process of working out ideas behind the polished completeness they eventually take on, that didn’t disavow the insecurities and anxieties of academic life in favor of a self-doubt-free public persona.

But in my new role, I’m increasingly aware that there are two Kathleen Fitzpatricks in the world: on the one hand, one that’s taken on a form of public service, that represents a large and important organization, that has a mission focused on something bigger than myself, and on the other, one that’s… just me. It’s something a bit more than the usual public/private divide; it’s a split between a self that speaks with a voice that’s larger than itself, and a self that seems always too small, too local, ever to be spoken for publicly.

And so while I still find myself wanting to push back against what I’ve always found to be a pretty gendered mode of being an academic — always projecting confidence, being convinced of one’s rightness, putting forward arguments that are never anything other than unimpeachable — and instead model a kind of self-questioning that I am convinced is necessary for real intellectual and personal growth, I now increasingly wonder whether I can or should continue to do so as myself. There are questions to be asked about that mode of writing in and of itself, of course — is it possible to take on a project of open self-questioning without falling into an equally gendered mode of self-doubt and insecurity? — but there are also pressing concerns to be raised about whether the kinds of introspection the blog has long inspired in me can co-exist at all with the public role I have now chosen to occupy.

This wouldn’t be the first time I’ve announced an attempt to reconcile the blog with this public persona, and that I haven’t managed to do so yet bespeaks the difficulty of the project. But — in a stroke of what’s either meaningful irony or mere coincidence — I’m actually writing right now, for a public venue, about the importance of taking the work that gets done on scholarly blogs seriously. And that juncture, or disjuncture, depending on your view of it, has me thinking about the changing function of the public platform at the various stages of a career, the ways in which we all produce different voices at different moments, and the degree to which a coherent self can ever speak, or be spoken.

More Than Mere Polyester Would Suggest

Earlier this year, I attended a conference at which I was given a really nice fleece jacket. Really nice.

I’ve known for a while that I harbor a somewhat extreme love for this fleece jacket; it’s become my comfy home top layer of choice, getting some wear pretty much every day. But I hadn’t realized quite how important it was to me, I guess.

Because I was surprised this morning to awaken from a dream about the fleece jacket. I was on a very small plane, and there wasn’t room for my stuff in the cabin, so they took everything from me — including the fleece jacket — and put it in some exterior compartment of the plane, kind of like the luggage compartment of really big buses.

When we landed, what they handed back to me was not my fleece jacket. It looked like it, but it was a small, and I was pretty sure mine was a medium. I tried it on to check, and while the small did fit, it wasn’t quite as comfortable as I remembered my fleece jacket having been. And then I remembered the slight oddity about how my zipper pulls are attached, and realized that I could tell if this fleece jacket was mine by checking those pulls.

This was not my fleece jacket.

So I circled back around to where the rest of the passengers — maybe a hundred of them, which is weird considering how small that plane was — were waiting for their bags, and asked who had handed me this fleece jacket. Everybody pointed to one guy, who at first seemed to be about ten feet tall, and then appeared to be sitting up on a high shelf. I yelled up to him about the fleece jacket dilemma, and tossed up the one that wasn’t mine. He looked around and tossed me back down… a t-shirt. One I’d never seen before.

When I woke up, I was in negotiations with him to at least get back the small fleece jacket, if I couldn’t actually get mine. And was surprised by the level of relief I felt upon discovering that it was only a dream.

I am super curious what this fleece jacket — which I am wearing as I write — has come to stand in for in my unconscious. Whatever it is, I’m clinging to it pretty fiercely.

Soundtrack

One of the things that I find fascinating just about every time I travel around Europe is the music playing in the background in restaurants, bars, hotels, stores, and so forth. It’s not terribly surprising that a bunch of it is American pop music, of course, but I’m frequently caught off-guard by what American pop music is playing.

I wouldn’t pay it much attention at all, I think, if it were relentlessly current — the stuff that’s being pressed on all of us, all the time — but what I hear here is often oddly dated, and yet not anything that would fall into the category of obvious “classics” that could simply fade into the background. There was one summer in Paris, for instance, when we heard George Benson’s Give Me the Night everywhere we went. And not just one song off of the album, which might have rotated onto some weird retro playlist, but the entire album.

Here in Prague, it’s Tracy Chapman’s 1988 eponymous album. In one bar, it played start to finish, but I’ve also heard selections from it in at least three other places here, including our hotel’s lounge — and not just “Fast Car,” but several other singles as well.

What’s that all about? How is it that a 22-year-old album rotates back into currency this far from its origins?

On the Impossibility of Naive Reading

The recent New York Times Opinionator column by Robert Pippin, “In Defense of Naive Reading”, has had me thinking for the last week or so. I knew I wanted to respond right away, but I wasn’t sure how, exactly; there’s an awful lot in the post that I’m quite sympathetic to, and yet something in it rubbed me exactly the wrong way.

Part of the irritation arises from the degree to which the humanities as they’ve been studied for the last several decades are under attack. Again. (Including from within.) Pippin himself begins with the culture wars of the 1980s, a grim reminder of the repeated cycles in which academic practices in the humanities, and particularly in literary studies, come under scrutiny, especially in times of economic crisis. There’s no doubt a degree of “here we go again” in my annoyed response.

But there’s more to it than that, because I think there’s more at work in Pippin’s critique than any kind of simple attack on those silly humanists. “In Defense of Naive Reading” bears deep connections to a proliferating set of arguments calling for a revaluation of amateur experiences of literary reading, arguments for which I have a tremendous amount of sympathy; Ted Striphas’s The Late Age of Print: Everyday Book Culture from Consumerism to Control and Jim Collins’s Bring on the Books for Everybody: How Literary Culture Became Popular Culture are two of the most thoughtful texts in this category. Even more broadly, however, Pippin’s argument connects to the anti-institutional “outta my way, prof!” rallying cry of Anya Kamenetz’s DIY U and YouTube’s An Open Letter to Educators. And it’s a perilously short step from valorizing extra-academic reading experiences to dismissing scholarly work on literary subjects as wasteful, pointless, and worthy of elimination.

So I find myself in the somewhat perplexing position of wanting to make a strong argument on behalf of public engagement with the materials of humanities research, and especially literature, while at the same time defending the importance of scholarship in the field, including scholarship that involves the kind of discourse that might exclude the uninitiated. I want to defend the kind of close reading that Pippin celebrates, but I also want to defend the theory that he dismisses. The question, of course, is how to do both of these things at once, which then turns into a larger question: What is the function of literary scholarship, and how does it inform or distinguish itself from reading-in-general?

A key aspect of literary scholarship, and the part that perhaps most informs what goes on in the literature classroom, has to do with making what seems to be obvious instead appear strange, to require the reader to step back from something that seems familiar and look at it from a new angle. The point is less to get the reader to think in some particular different way about the object than it is to get her to think differently about her own perspective with respect to that object.

And the key aspect of that endeavor is getting her to recognize that she has a perspective in the first place, one that is, by definition, non-neutral. And it’s this that makes me most want to argue with Pippin: not that I want to dismiss or displace the close, careful wrangling with primary texts, but instead to insist that no such reading can ever be naive, except in a not-so-faintly pejorative sense.

Every reading presupposes a theory, even where that theory is about the transparency of representation or about the existence of a text with defined borders. “Close reading” isn’t just careful reading with attention to detail; it’s a theoretical argument about where a text’s meaning is to be found, how it can be understood, and, perhaps most importantly, who is responsible for having put it there.

In that sense, the refusal of theory is not just a refusal of difficulty or abstruseness, but instead a refusal to lay perspective bare, or even to admit that there is perspective involved in the reading process in the first place. And lest it need be said: the admission of perspective in the reading process is not a slippery slope to some mythical anything-goes mode of postmodernist free-for-all. There is still evidence, analysis, and argument required in defending any particular interpretation of a text. But the point is that there is no singular, correct, perspective-free interpretation of a text.

In that sense, the value of literary theory has been in helping scholars and students tease out not how to read, but how they do read, how a lifetime of encounters with particular kinds of representations train us to understand future texts. And, not incidentally, to help students think about other potential readings, and what they might reveal about the default positions of our culture.

The challenge for literary scholars, I would argue, is not to return to the kind of naive, untheorized reading that Pippin seems to espouse, but instead to find ways to express the significance of theoretical insights to a wider audience. That is to say that we should neither dig in our heels on the issue of difficulty, nor give up the kinds of work that we have taken on, but instead that we need to find better ways to convey — to our students, of course, but even more importantly to the reading public at large — why the work we do matters, and why the ways that we do that work matter as well.

In a time of crisis such as we now face, dismissing that public as anti-intellectual would be an enormous mistake — but so would be giving up on the kinds of rigor that much theoretical discourse can produce. The trick lies in finding ways to bring a broader audience into our arguments, and finding ways to make those arguments that demonstrate to that audience why they should care about them, and about the future of our fields.

[P.S.: Just as I finish this, I see that my friend and colleague Kevin Dettmar has posted about the same article. Great minds, etc.]

Why I Can’t Wait to Get My Hands on My New iPad, All You Haters Notwithstanding

So yes, I did pre-order an iPad, or actually pre-reserve one with my college’s bookstore. And I intend to pick it up first thing tomorrow morning. And I absolutely cannot wait.

This is not a cool thing to admit in at least some of the circles I travel in. The open source/open content folks I know are understandably concerned about the iPad’s status as a tethered device, closed to programs and content not Apple-approved. I get that, and I’m concerned about it, too. At least for the couple of hours it’ll take before somebody posts a jailbreak for it, the iPad will be a closed system.

Except: there’s that web thing. While web apps on the iPhone haven’t been quite as flexible as one might like them to be, those difficulties have been due at least in part to the restrictions on browser window size, and in part to the inconvenient crashiness of Safari. I have no sense, of course, that the latter problem will be fixed on the iPad, but the former certainly will be. And not having to use restricted mobile versions of web apps might change the game entirely; using Gmail in all its non-mobile glory might make me not care that it’s a web app. And as more and more of the stuff I do becomes browser-oriented, there’s decreasing cause for me to be concerned about the restrictions Apple places on the app store.

The other concern that many folks I know have voiced is that the iPad isn’t just tethered; in Jonathan Zittrain’s term, it’s appliancized. It’s a device primarily meant for consumption rather than production. And the more we allow our computers to devolve into appliances, the less likely they are to be generative devices, devices that allow for unexpected uses, for productive surprises, for hacking.

I agree with that logic, generally, but not as applied to the iPad in particular. The iPad is indeed primarily meant for consumption — which means that it can’t really replace the computer, and indeed shouldn’t. At least not yet, in any case; the iPad as it will be released tomorrow is a device that one can program for, but not yet a device that one can program on.

But that doesn’t mean that it will always be so. As Stephen Fry reminds us in his article in Time, the Mac was at its release “derided as a toy, a media poseur’s plaything and a shallow triumph of style over substance,” but the creativity that the Mac inspired transformed the landscape of personal computing; similarly, the iPhone was seen “as a plaything, but it transformed the smart-phone landscape.” None of us have any way of knowing what people will do with their iPads as yet, but don’t count ingenuity out. Engaging devices have a way of producing unexpected results.

I also take issue with the consumption/production divide that, as Matt Kirschenbaum pointed out this morning, is being reified by much of the technorati’s response to the iPad. On the one hand, I want to say “what’s so bad about consumption, anyhow?”; I’ve never been upset with my television for not allowing me to broadcast. And on the other hand, I also want to note the myriad ways that consumption has always led to production, has always been a necessary stage on the way to production. Writing is something we should all aspire to, but writing without reading is an impossibility. Devices that can provide for more engaging reading — and I mean that in the broadest sense, not just in the interaction with text but with images, audio, video, games — will inspire new kinds of writing, new kinds of creative production, in forms that we can’t as yet even imagine.

Play is inspiring. And as of tomorrow morning, I hope to be inspired, in new ways.

The Rise of the Landscape Web

I’ve noticed over the last couple of months that several of my favorite websites were becoming, well, wide. It’s become increasingly common, in fact, for me to find myself scrolling sideways as well as up-and-down when out there browsing, and frankly, it was getting to be a bit annoying.

But with my entry (yes, at last!) into the ranks of those who are getting to play with the Google Wave preview, it hit me: the fundamental orientation of the web is changing. And Wave may well cement that change.

Here’s the thing. Early web pages were composed vertically, in portrait layout, partially because of the limitations of screen width and partially because of the rear-view mirrorism that caused us to think about these new digital forms as “pages.” That concept has proven surprisingly sticky: web “pages” scroll vertically to this day, and very few sites have played with the horizontal axis.

Enter Google Wave, however (and possibly, as its necessary precursor, Google Chrome, though being a Mac user I can’t really speak to that at all).

[Screenshot: Google Wave]

Its three-column orientation demands horizontality — if the columns are too narrow, you lose a lot of the toolbar options, and everything just feels out of proportion.

So this makes me wonder, if Wave gets the kind of buy-in that the hype suggests, whether we’re seeing the fundamental orientation of the web switching from portrait to landscape — not that we won’t still be scrolling vertically rather than horizontally, but that the basic screen unit will be wider than it is tall.

This has deep implications for contemporary web design, I think, and not least for me; the other Planned Obsolescence works quite well in a wide window: you can stretch the main text and comments columns to be as wide as you would like. But it doesn’t work well here at all, as I’ve been using a fixed-width theme, and that ugly gray background block at right just gets bigger and bigger.

I’ll be curious to see whether this shift becomes — no pun intended — broader. Is the basic assumption of web layout becoming landscape? How do we organize a wider window?

Something’s… Not… Right…

I went to bed last night about 11.30, and got up this morning around 7.30. And in between, didn’t receive a single piece of email. For some reason, I’m having a hard time accepting this — nothing from my listservs, nothing from my students, nothing from random spammers. Nothing. Why is it that eight hours of radio silence, over a Saturday night and into Sunday morning, has me convinced that something is wrong?

Transitions

I’m finding it extremely difficult this year to make the shift out of the fall semester and into everything I need to focus on over the winter break. Probably I should cut myself some slack about this, given that I filed my last grade for the semester at 5.30 this morning. But I’ve had some time over the last few days to begin thinking about the what-next stuff, and I haven’t exactly gotten myself focused, or even aimed in the right direction.

Part of the issue is the daunting nature of what I’ve got to accomplish over the break: I really need to make some serious headway on the actual production of actual text for the book; I need to get a lot done for MediaCommons*; I need to get one entirely new class prepared and one previously taught class heavily revised; I need to finish preparing for a Mellon workshop I’m hosting in a couple of weeks.

But it’s also the plethora of small details from this semester that are still hanging over my head: a peer review, a committee report, a search. (Okay, that last one’s not at all small.) And, in fact, these two factors — the daunting nature of the big tasks, and the proliferating nature of the small ones — combine to make the possibility of focus even more distant, as the small tasks provide a too-welcome distraction from the big ones, feeling more urgent, even though not important.

Here’s hoping for the — what does it require? will? — to keep centered on the important this break, and to find ways to recalibrate that sense of urgency.

—–

* Whoops! Forgot the footnote, which was meant to say that MediaCommons is emerging from its persistent vegetative state. The old site (i.e., that which was current back in July) is back online, and the development of the new site is once again proceeding apace. Watch this space for more developments!

Hawaii, Day 1


[Photo: “the view from here,” originally uploaded by KF]

R. and I are off on another of our famous working vacations, a phenomenon which makes my family (and many other folks as well) think we’re positively nuts. “You’re going to Hawaii in order to sit in front of your laptop and work?” they ask.

Well, yes.

The joy of these trips has a good bit to do with the ways a change of scenery, an escape from the usual pathways and the quotidian business of house- and cat- and job-care frees up the brain to focus on a project in a new way. And the beauty of Hawaii in particular for such a venture has to do — well, partly with the beauty of the scenery, but partly with the change of time as well as of place.

I got up this morning at 4.30 am, feeling pretty well-rested and ready to go. Sat down at the computer, and very quickly produced a six-page overview of the contours of the chapter I’m beginning to write, all the while watching the light gradually come up outside. It’s now 8.30 am, and I feel as though I’ve had a successful work day already, and can either continue plowing along or move on to something else as I like.

Day 1, accomplished already. I’m feeling pretty good about where things go from here…