Reading, Privacy, and Scholarly Networks

Sarah Bond published a column on Forbes.com this morning on the importance of not-for-profit scholarly networks. I’m thrilled that she mentioned not only my blog post but also the work we’re doing at Humanities Commons. But if she hasn’t convinced you that it’s time to #DeleteAcademiaEdu yet, maybe this will: on Friday, the network launched a new “prime” feature that allows members to pay to see the identities of users who are reading the work they share. That is to say: if you are reading things on Academia.edu, the network may sell your user info.

That they’re offering to sell this info to the author of the work involved does not make it okay. This is a frightening violation of the privacy standards that — a key point of comparison — libraries have long maintained with respect to reader activity. And selling your data to authors may only be the beginning.

I don’t want to read too much into the fact that they launched this “feature” on inauguration day. But the coincidence should prompt scholars to become even more vigilant about where they’re sharing their work, and about which networks they’re supporting as they access the work of others.

Academia, Not Edu

Last week’s close attention to open access, its development, its present state, and its potential futures, surfaced not only the importance for both the individual scholar and the field at large of sharing work as openly as possible, with a range of broadly conceived publics, but also some continuing questions about the best means of accomplishing that sharing. As I mentioned last week, providing opportunities for work to be opened at the point of publication itself is one important model, but a model that may well have occluded our vision of other potential forms: the ease of using article-processing charges to offset any possible decline in subscription revenue as previously paywalled content becomes openly available is so apparent as to have become rapidly naturalized, allowing us to wave off the need for experimentation with less obvious — and less remunerative — models.

Among alternative models, as I noted, is author-originated sharing of work, often in pre-print forms, via the open web. Many authors already share work in this way, whether posting drafts on their blogs for comment or depositing manuscripts in their institutional repositories. And recently, many scholars have also taken to sharing their work via Academia.edu, a social network that allows scholars to build connections, get their work into circulation, and discover the work of others. I’m glad to see the interest among scholars in that kind of socially-oriented dissemination and sharing, but I’m very concerned about this particular point of distribution and what it might mean for the future of the work involved.

Here’s the crux of the matter:

The first thing to note is that, despite its misleading top level domain (which was registered by a subsidiary prior to the 2001 restrictions), Academia.edu is not an educationally-affiliated organization, but a dot-com, which has raised millions in multiple rounds of venture capital funding. This does not imply anything necessarily negative about the network’s model or intent, but it does make clear that there are a limited number of options for the network’s future: at some point, it will be required to turn a profit, or it will be sold for parts, or it will shut down.

And if the network is to turn a profit, that profit has a limited number of means through which it can be generated: either academics who are currently contributing their work to this space will have to pay to continue to access it, or the work that they have contributed will somehow be mined for sale, whether to advertisers or other interested parties. In fact, Academia.edu’s CEO has said that “the goal is to provide trending research data to R&D institutions that can improve the quality of their decisions by 10-20%.” Statements like this underwrite Gary Hall’s assessment of the damage that the network can do to genuine open access: “Academia.edu has a parasitical relationship to the public education system, in that these academics are labouring for it for free to help build its privately-owned for-profit platform by providing the aggregated input, data and attention value.” The network, in other words, does not have as its primary goal helping academics communicate with one another, but is rather working to monetize that communication. All of which is to say: everything that’s wrong with Facebook is wrong with Academia.edu, at least just under the surface, and so perhaps we should think twice before committing our professional lives to it.

The problem, of course, is that many of us face the same dilemma in our engagement with Academia.edu that we experience with Facebook. Just about everyone hates Facebook on some level: we hate its intrusiveness, the ways it tracks and mines and manipulates us, the degree to which it feels mandatory. But that mandatoriness works: those of us who hate Facebook and use it anyway do so because everyone we’re trying to connect with is there. And as we’ve seen with the range of alternatives to Facebook and Twitter that have launched and quickly faded, it’s hard to compete with that. So with Academia.edu: I’ve heard many careful, thoughtful academics note that they’re sharing their work there because that’s where everybody is.

And the “everybody” factor has been a key hindrance to the flourishing of other mechanisms for author-side sharing of work such as institutional repositories. Those repositories provide rigorously protected and preserved storage for digital objects, as well as high-quality metadata that can assist in the discovery of those objects, but the repositories have faced two key challenges: first, that they’ve been relatively siloed from one another, with each IR collecting and preserving its own material independently of all others, and second, that they’ve been (for the obvious reason) institutionally focused. The result of the former is that there hasn’t been any collective sense of what material is available where (though the ARL/AAU/APLU-founded project SHARE is working to solve that problem). The result of the latter is that a relatively small amount of such material has been made available, as researchers by and large tend to want to communicate with the other members of their fields, wherever they may be, rather than feeling the primary identification with their institutions that broad IR participation would seem to require. So why, many cannot help but feel, would I share my work in a place where it will be found by few of the people I hope will read it?

The disciplinary repository may provide a viable alternative — see, for instance, the long-standing success of arXiv.org — but the fact that such repositories collect material produced in disciplines rather than institutions is only one of the features key to their success, and to their successful support of the goals of open access. Other crucial features include the not-for-profit standing of those repositories, which can require thoughtful fundraising but keeps the network focused on the researchers it comprises, and those repositories’ social orientation, facilitating communication and interconnection among those researchers. That social orientation is where Academia.edu has excelled; early in its lifespan, before it developed paper-sharing capabilities, the site mapped relationships among scholars, both within and across institutions, and has built heavily upon the interconnections that it traced — but it has not primarily done so for the benefit of those scholars or their relationships.

Scholarly societies have the potential to inhabit the ideal point of overlap between a primary orientation toward serving the needs of members and a primary focus on facilitating communication amongst those members. This is in large part why we established MLA Commons, to build a not-for-profit social network governed and developed by its members with their goals in mind. And in working toward the larger goals of open access, we’ve connected this social network with CORE, a repository through which members can not only deposit and preserve their work, but also share it directly with the other members of the network. We’re also building mechanisms through which CORE can communicate with institutional repositories so that the entire higher-education-based research network can benefit.

Like all such networks, however, the Commons will take time to grow, so we can’t solve the “everybody” problem right away. But we’re working toward it, through our Mellon-supported Humanities Commons initiative, which seeks to bring other scholarly societies into the collective. The interconnections among the scholarly society-managed Commonses we envision will not only help facilitate collaboration across disciplinary lines but also allow members with overlapping affiliations to have single sign-on access to the multiple groups of scholars with whom they work. We are working toward a federated network in which a scholar can maintain and share their work from one profile, on a scholar-governed network whose direction and purpose align with their own.

So, finally, a call to MLA members: when you develop your member profile and share your work via the Commons, you not only get your work into circulation within your community of practice, and not only raise the profile of your work within that community, but you also help support us as we work to solve the “everybody” problem of the dot-com that threatens to erode the possibilities for genuine open access.

Evolving Standards and Practices in Tenure and Promotion Reviews

The following is the text of a talk I gave last week at the University of North Texas’s Academic Leadership Workshop. I’m hoping to develop this further, and so would love any thoughts or responses.

I’m happy to be here with you today, to talk a bit about evolving standards and practices in promotion and tenure reviews. Or, perhaps, about the need to place pressure on those standards and practices in order to get them to evolve. Change comes slowly to the academy, and often for good reason, but we find ourselves at a moment in which uneven development has become a bit of a problem. Some faculty practices with respect to scholarly work have in recent years changed faster than have the ways that work gets evaluated. If we don’t make a considered effort to catch our review processes up to our research and communication practices, we run the risk of stifling innovation in the places we need it most.

I want, however, to start by noting that most of what I am proposing here is intended to open a series of issues for discussion, rather than presenting a set of answers to the problems. Every university, every field, indeed, every tenure case brings different needs and expectations to the review process; it’s only in teasing out those needs and expectations that you can begin to craft a set of guidelines that will adequately represent your campus’s values and yet be supple enough to continue to represent those values in the years to come.

So first, a bit of recent history around these issues, before I move to the kinds of issues I believe we need to be considering with respect to tenure processes. In 2002, then-president Stephen Greenblatt sent a letter to the 30,000 members of the Modern Language Association, alerting them to a coming crisis in tenure review processes. The failing fiscal model under which university presses operate, he noted, was resulting in the publication of fewer and fewer scholarly monographs, a reduction being felt most acutely in the area of first books; as a result, work that was of perfectly high quality but that did not present an obvious market value was in danger of not finding a publisher. Unless departments were willing to think differently about their review processes, recognizing the “systemic” obstacles facing all of scholarly communication, Greenblatt argued,

people who have spent years of professional training — our students, our colleagues — are at risk. Their careers are in jeopardy, and higher education stands to lose, or at least severely to damage, a generation of young scholars.

In considering what might be done, Greenblatt noted that

books are not the only way of judging scholarly achievement. Should our departments continue to insist that only books and more books will do? We could try to persuade departments and universities to change their expectations for tenure reviews: after all, these expectations are, for the most part, set by us and not by administrators. The book has only fairly recently emerged as the sine qua non and even now is not uniformly the requirement in all academic fields. We could rethink what we need to conduct responsible evaluations of junior faculty members.

There are some things that might bring one up short here: for instance, Greenblatt’s acknowledgement that the book is not “uniformly the requirement in all academic fields” of course masks the degree to which the book-based fields are outliers in the academy today. But nonetheless, those fields’ reliance on the book as the gold standard for tenure was becoming, for a host of reasons, problematic, and thus Greenblatt urged departments to reconsider their review practices, as they

can no longer routinely expect that the task of scholarly evaluation will be undertaken by the readers for university presses and that a published book will be the essential stamp of a young scholar’s authenticity and promise.

Departments, in other words, must step forward and establish means of determining for themselves where appropriate evidence of a young scholar’s “authenticity and promise” lies.

In the years following this letter, the MLA created a task force charged with examining the current state of tenure standards and practices and making recommendations for their future. That task force issued its final report in December 2006, presenting a list of 20 recommendations, supported by nearly 60 pages of data and analysis. Their recommendations included things like:

The profession as a whole should develop a more capacious conception of scholarship by rethinking the dominance of the monograph, promoting the scholarly essay, establishing multiple pathways to tenure, and using scholarly portfolios.

And:

Departments and institutions should recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship.

And:

Departments should conduct an in-depth evaluation of candidates’ dossiers for tenure and promotion at the departmental level. Presses or outside referees should not be the main arbiters in tenure cases.

And:

Departments and institutions should facilitate collaboration among scholars and evaluate it fairly.

None of these recommendations seem terribly controversial, and yet here we find ourselves. Over seven years have passed since the task force report; nearly twelve have passed since the Greenblatt letter. And by and large, standards and practices in tenure and promotion reviews have changed but little. The book remains the gold standard in what are still referred to as “book-based fields,” and departments still find themselves stymied when it comes to evaluating digital work, collaborative work, public work, and the like.

It would not be unreasonable to ask whether Stephen Greenblatt was simply being a bit alarmist in the sense he conveyed of an impending crisis for the faculty. We do not appear to be surrounded by a lost generation of scholars whose prospects were damaged by our continued adherence to the book standard in the face of university press cutbacks — and yet, given that faculty who are not awarded tenure leave our midst, and that we are not haunted by the lingering specters of unpublished books, it’s possible that the damage is nonetheless being inflicted, just in a way that we are able to keep outside our awareness. One clear, if anecdotal, effect of our refusal to change may well be precisely how much we have stayed the same: I have been told by several junior faculty members, and have heard the same thing secondhand from many others, that they have been counseled by senior colleagues to put aside their more experimental projects and focus on the traditional monograph until tenure is assured. The counselor is generally well-intentioned, wanting to help his or her junior colleague have as frictionless an experience of the review process as possible. But the outcome, too often, is that the junior faculty member is either made risk-averse or, at best, ushered into the more reliable reward channels of the ways that things are usually done, and as a result never returns to the transformative work originally imagined. And that disciplinary lockdown then gets transmitted to the next round of junior colleagues. And so we continue, as a field, to rely on the monograph as the gold standard for tenure, and we continue to find ourselves baffled by the prospect of evaluating anything else.

This is not to say, however, that there has not been change — even in the most hide-bound of book-based fields — over the last twelve years. Scholars today are communicating with one another and making their work public in a range of ways that were only beginning to flicker into being in 2002. Many faculty maintain rich scholarly blogs, either on their own or as part of larger collectives, through which they are publishing their work; others are working on a range of small- and large-scale corpus building, datamining, mapping, and visualization projects, all of which seek to present the results of scholarly research and engagement in rich interactive formats. Projects in a wide range of digitally-inflected fields across the humanities, sciences, and social sciences are both using and developing a host of new methodologies, both for research and for the communication of the results of that research. And these projects are not just transforming their fields, but also creating a great deal of interest in scholarly work amongst the broader public.

And yet I visited a university last fall whose form for the annual professional activities report asks faculty members to list their (1) book publications, (2) peer-reviewed journal articles, (3) major conference presentations, and so on, finally getting to “web-based projects” somewhere just above volunteer service in the community. It’s just a form, of course, but in that form is inscribed the hierarchy of what we value, as evidenced by what we actually reward in our evaluation and merit review processes. And if we are going to take web-based work as seriously as traditionally published work, we need to manifest that in those reward systems.

However, I do want to be clear about something: What I am arguing here today is not that digital projects of whatever variety should be treated as the equivalent of a book or a journal article. In fact, attempting to draw those equivalences can get us into trouble, as digital work demands its own medium-specific modes of assessment. Digital projects are often radically open, both in their mode of publication and their mode of peer review; they are often process-oriented, without a clear moment of “publication” or a clear completion date; they are very frequently code-based, and often non-linear, in ways that require that they be experienced rather than simply read. And too often review processes eliminate that possibility; not only do our forms rank web-based work as unimportant, but our processes require that such work be printed out and stuck in a binder. This is clearly counter-productive; we cannot continue evaluating new kinds of work as if they had been produced and could be read just like the print-based work we’re accustomed to.

But what I’m after here is not a new set of equally rigid processes that better accommodate the particularity of the digital. Rather, our review processes need to develop a new kind of flexibility — in no small part because developing a set of criteria that perfectly deals with all of today’s forms of scholarly communication will in no way prepare you for tomorrow, or next year. The fact of the matter is that scholarly communication itself is in a period of profound change, profound enough that change itself is the only certainty. And so we need guidelines that will enable the faculty and the administration together to locate the core values that we share and to establish processes that will take each case on its own terms, while nonetheless proceeding in ways that can be fairly applied to all cases.

In considering such a transformation, I believe that we need to begin by thinking differently about what it is we’re doing in the tenure review process in the first place. We have long treated the tenure review, and to a lesser extent the review for promotion to full, as a threshold exercise: an assessment of whether the candidate has done enough to qualify. The result, I believe, is burnout and disgruntlement in the associate rank. There’s a reason, after all, why The Onion found this funny, and it’s not just about the privileges of lifetime tenure producing entitled slackers.

Assistant professors run the pre-tenure period as a race and, making it over the final hurdle, too often collapse, finding themselves exhausted, without focus or direction, depressed to discover that what is ahead of them is only more of the same. The problem is not the height of the hurdles or the length of the track; it’s the notion that the pre-tenure period should be thought of as a race at all, something with a finish line at which one will either have won or lost, but will in any case be done. I believe that we can find a better means of supporting and assessing the careers of junior faculty if we start by approaching the tenure review in a different way entirely, thinking of it not as a threshold exercise but instead as a milestone, a moment of checking in with the progress of a much longer, more sustained and sustainable career.

Here’s the thing: We hire candidates with promise, expecting that their careers will be productive over the long term, that they will engage with their material and their colleagues, and that they will come to some kind of prominence in their fields. The tenure review, at the end of the first six years of those careers, should ideally not be a moment of determining whether those candidates have thus far done X quantity of work (that is, that they have done enough to earn tenure, and can safely rest), but rather of asking whether the promise with which those candidates arrived is beginning to bear out. Let me say that again: beginning to bear out. The question we are asking, at tenure, should not be whether the full potential of a candidate has been achieved, but whether what has been done to this early point in a career gives us sufficient confidence in what will happen over the long haul that we want the candidate to remain a colleague for as long as possible. In order to figure that out, the questions we ask about the work itself should not — or at least should not only — be about its quantity; rather, we should focus on its quality. And there are a couple of different ways of thinking about and assessing that quality: first, through the careful evaluation by experts in the candidate’s field, and second, through an exploration of the evidence of the impact the candidate’s work is having in his or her field.

Such a focus on impact might help us more fairly evaluate the new kinds of digital projects that many scholars today are engaged in. But it might also encourage us to reassess a range of forms of work that have not been adequately credited in recent years. In fact, I would argue that the reforms that we need in our tenure review processes are not just about accommodating the digital at all. We also need to acknowledge and properly value forms of intellectual labor that have long been done by the faculty but that for whatever reason have gone undervalued. In my own area of the humanities, such work includes translation, or the production of scholarly editions, or the editing of scholarly journals. These are forms of work that have long been part of academic production, but that have by and large been treated as “service to the field.” And yet — just to pick up one of those examples — what more powerful position in shaping the direction of a field is there than that of the journal editor? The impact of such an editor across his or her term is likely to have a far greater and far longer-lasting influence on his or her area of study than any monograph might produce — and yet only the monograph, in most institutions, will get you promoted.

This is just one of the kinds of problems that we need to confront. But again, I want to emphasize that it’s not enough simply to add “digital work” or “journal editing” to the list of kinds of work that we accept for tenure and promotion, not least because the impulse then is to apply current standards to those objects: are there kinds of journals that “count,” and kinds that don’t? Does the journal have to have a specified impact factor? I’m sure you can imagine more such questions — questions that I’m convinced lead us in the wrong directions, toward increasing rigidity rather than flexibility. Instead, I want to head off in a different direction. In the rest of my time this morning, I want to sketch out a few of the ways that our thinking about the review process might change in order to help produce the results we’re actually aiming for. The new ways of thinking that I’m urging today may require us to give up our reliance on some relatively easy, objective, quantitative measures, in favor of seeking out more complex, more subjective qualitative judgments — but I would suggest that these kinds of complex judgments about research in our fields are the core of our job as scholars, and that we have a particular ethical obligation to take our responsibility for such judgments seriously when they determine the future of our colleagues’ careers. This different direction will also require us to think as flexibly as we can about how our practices should not only change now, but continue to evolve as the work that junior scholars produce changes.

So, I want to float a number of principles meant to instigate some new ways of thinking about the tenure standards and processes of the future. Though these are pitched as imperatives, they are not specific practices, but rather considerations for the creation of practices. First:

(1) Do not let “but we don’t know how to evaluate this kind of work” stand as a reason not to evaluate it.

Many disciplinary organizations have been hard at work developing criteria for evaluating new kinds of scholarly work. For instance, the MLA’s Committee on Information Technology developed such a set of best practices for the evaluation of digital work in MLA fields back in 2000, and has recently updated those guidelines. The CIT has further created an evaluation wiki, which includes information such as a breakdown of types of digital work. And, perhaps most importantly, the CIT has led a series of workshops before the annual convention designed to give department and campus leaders direct experience of the kinds of questions that need to be asked about digital work, and the ways that such evaluation might proceed. In conjunction with that workshop, the CIT has produced a toolkit. And the MLA also has guidelines for the evaluation of translations and of scholarly editions, among others.

And this is just the MLA. Other scholarly organizations have done similar work on the sorts of nontraditional projects that are appearing in their own fields. And several universities have developed their own policies for how such work should be evaluated, including Emory University and the University of Nebraska at Lincoln.

So there are excellent criteria out there that can be used in evaluating many non-standard kinds of scholarly work. Review bodies, from the department level up to the university level, must familiarize themselves with those criteria and put them to use in their evaluations.

(2) Support evaluator learning.

Despite the existence of these excellent criteria for evaluating new work, however, many faculty, especially those who have long worked in exclusively traditional forms, need support in beginning to read, interpret, and engage with digital projects and other new forms of scholarly project. This need is of course what led the MLA’s Committee on Information Technology to hold its pre-convention workshops; similar kinds of workshops have been held at the summer seminars of the Association of Departments of English and the Association of Departments of Foreign Languages, and at NEH-funded summer workshops. On the local level, you might enlist the scholars on your campus who are doing digital work or other forms of nontraditional scholarship in leading similar workshops for the faculty and administrators who play key roles in the tenure review process.

(3) Engage with the work on its own terms, and in its own medium.

Supporting evaluators in the process of learning how to engage with new kinds of work is crucial precisely because the work under review must be dealt with as it is, as itself. If I could wave my magic wand and eliminate one bit of practice in tenure and promotion evaluations, it would probably be the binder. More or less every year I hear reports from scholars whose work is web-based but who have been asked to print out and three-hole-punch that work in order to have it considered as part of their dossiers. Needless to say, eliminating the interaction involved in web-based projects undermines the very thing that makes them work. As the MLA guidelines frame it, “respect medium specificity” — engage with new work in the ways its form requires.

(4) Dance with the one you brought.

In the same way that the work demands to be dealt with on its own terms, it’s crucial that tenure review processes engage with the candidates we’ve actually hired, rather than trying to transform them into someone else. While it’s tempting to advise junior scholars to take the safer road to tenure by adhering to traditional standards and practices in their work, such advice runs the risk of derailing genuinely transformative projects. Particularly when candidates have been hired into positions focused on new forms of research and teaching, or when they have been hired because of the exciting new paths they’re creating, those candidates must be supported in their experimentation. In creating that support, it’s particularly important to guard against doubling the workload on the candidate by requiring them both to complete the project and to publish about the project, or worse, to complete the project and do traditional work as well. This is a recipe for exhaustion and frustration; candidates should be encouraged to focus on the forms of their work that present the greatest promise for impact in their fields.

(5) Prepare and support junior faculty as they “mentor up.”

My emphasis on supporting the candidates that you have doesn’t mean those candidates won’t need to persuade their senior colleagues of the importance of their work. Scholars working in innovative modes and formats must be able to articulate the reasons for and the significance of their work to a range of traditional audiences — and not least, their own campus mentors. In theory, at least, this is the case for all scholars; it’s the purpose that the “personal statement” in the tenure dossier is meant to serve. For scholars working in non-traditional formats, however, there is additional need to explain the work to others, and to give them the context for understanding it. That process cannot begin with, but rather must culminate in, the personal statement. Throughout the pre-tenure period, candidates should be given opportunities to present their work to their colleagues, such that they have lots of experience explaining their work — and ample responses to their work — by the time the tenure review begins. They also need champions — mentors who, having examined the work and come to understand its value, will help them continue to “mentor up” by arguing on behalf of that work amongst their colleagues.

(6) Use field-appropriate metrics.

Every field has its own ways of measuring impact, and the measures used in one field will not automatically translate to another. A colleague of mine whose PhD is in literature, and who began her career as a digital humanist, now holds a position that is half situated in an English department and half in an information science department. Her information science colleagues, in beginning her tenure review, calculated her h-index — and it was abysmal. The good news is that her colleagues then went on to calculate the h-indexes of the top figures in the digital humanities, and discovered that they were all equally terrible. Metrics like the h-index, or citation counts, or impact factors simply do not apply across all fields. It’s absolutely necessary that we recognize the distinctive measures of impact used in specific fields and assess work in those fields accordingly.
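For readers unfamiliar with the metric in question, the h-index those colleagues computed can be sketched in a few lines. This is a minimal illustration of the standard definition (the largest h such that the author has h papers cited at least h times each), not the implementation used by any particular citation database:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))
```

Even this toy version makes the field-dependence plain: the number is driven entirely by citation volume, so a field whose scholarship circulates through books, projects, and classroom use rather than journal citations will produce low scores for even its most influential figures.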

(7) Maybe be a little suspicious of counting as an evaluation method.

We tend to like numbers in our assessment processes. They feel concrete and objective, and some of them are demonstrably bigger than others. The problem is that we tend only to count those things that are countable, and too often, if it can’t be counted, it doesn’t count. But as qualitative social scientists — much less humanists — would insist, there is an enormous range of significant data that cannot be captured or understood quantitatively. Citation counts, for instance: such metrics can tell us how often an article has been referred to in the subsequent literature, but they can’t tell us whether the article is being praised or buried through those citations, whether it’s being built upon or whether it’s being debunked. So while I’m glad that problematic metrics like journal impact factor are gradually being replaced with a more sophisticated range of article-level metrics, I still want us to be a bit cautious about how we use those numbers. This includes web-based metrics: hits and downloads can be really affirming for scholars, but they don’t necessarily indicate how closely the work is being attended to, and they aren’t comparable across fields and subfields of different sizes. If we’re going to use quantitative metrics in the review process, they need careful interpretation and analysis — and even better, should be accompanied by a range of qualitative data that captures the reception and engagement with the candidate’s work.

(8) Engage appropriate experts in the field to evaluate the work.

It is, by and large, the external reviewers that we have relied upon to produce the qualitative assessment of the tenure dossier. These experts are generally well-placed, more senior members of the candidate’s subfield who are asked to evaluate the quality of the work on its own terms, as well as the place that work has within the current discourses of the subfield. Where candidates present dossiers that include non-traditional work, however, we must seek out external reviewers who are able to evaluate not just the work’s content — as if it were the equivalent of a series of journal articles or a monograph — but also its formal aspects. These experts can and should also uncover and evaluate the specific evidence of the work’s impact within the field. In the last couple of years, several colleagues and I have had the experience of being asked to undertake a review of a tenure candidate’s digital work, and have been specifically asked by those campuses to account for the technical value of that work and the significance that it has for the field. This kind of medium-specific review is, I would argue, necessary for all forms of nontraditional work: a candidate whose dossier includes translation should have at least one qualified external reviewer asked to focus on the significance of the translation; a candidate whose dossier includes journal editing should have at least one qualified external reviewer asked to focus on the significance of that editorial work for the field.

(9) But do not overvalue the judgments of those experts.

The external reviewers who are engaged by a department or a college to assess the work of a candidate are often the best placed to evaluate the quality of that work, its place within the subfield, its significance and reception, and the like. But all too often these reviewers are called upon — or take it upon themselves — to make judgments that are outside the scope of their expertise. We would do best to refrain from asking reviewers to indicate, and even specifically to enjoin them from indicating, whether a candidate’s work would merit tenure at their institution, or whether a candidate is among the “top” scholars in their field. Such comparisons rely on false equivalences among institutions and among scholars, and they are invidious at best.

Even more, departments must use the judgments of those experts to inform their own judgment, not to supplant it. Departments know the internal circumstances and values of the institution in ways that external reviewers cannot. And while the members of a departmental tenure review body might not be experts in a candidate’s specific area of interest, bringing in such experts cannot be allowed to absolve them of responsibility for exercising their own judgment, including engaging directly with the candidate’s work themselves.

(10) Avoid (or at least beware) the false flag of “objectivity.”

The desire to externalize judgment — whether by relying upon quantitative metrics or on the assessments of external reviewers — is understandable: we want our processes to be as uncontroversial, as scrupulous, and therefore as objective as possible. And there are certain subjective judgments — such as those around questions of “collegiality” or “fit” — that should not have any place in our review processes. But aside from those issues, we must recognize that all judgment is inherently subjective. It is only by surfacing, acknowledging, and questioning our own presuppositions that we can find our way to a position that is both subjective and fair. This is a kind of work that scholars — especially those in the qualitative social sciences and the humanities — should be well equipped to do, as it’s precisely the kind of inquiry that we bring to our own subject matter. And in this line, I want to note that the external judgments that we seek from outside reviewers are no more objective than are our own. If anything, external reviewer testimony itself requires the same kind of judgment from us as does the rest of the dossier.

Moreover — and I have a whole other 45-minute talk focusing on this issue — we need to acknowledge that “peer review” is not itself an objective practice, and is therefore not an objective marker of quality research. And there isn’t just one appropriate way for peer review to be conducted. Many publications and projects are experimenting with modes of review that are providing richer feedback and interaction than can the standard double-blind process; it’s crucial that those new modes of review be assessed on their own merits, according to the evidence of quality work that they produce, and not dismissed as providing insufficiently objective criteria for evaluation.

(11) Reward — or at least don’t punish — collaboration.

Along those lines: I have been told by members of university promotion and tenure committees that an open peer review process, or other forms of openly commentable work, would doom a tenure candidate because anyone who participated in that process would be excluded as a potential external reviewer. The intent again is objectivity: any scholar who has had any contact with the candidate’s work, or has engaged in any communication with the candidate, or has participated in any projects with the candidate, could not possibly be “objective” enough to evaluate the work.

This is on the one hand the kind of adherence to the false flag of objectivity that I think we need to get away from, and on the other a highly destructive misunderstanding of the nature of collaboration in highly networked fields today. I understand the impulse, to ensure that the judgment provided by an external reviewer is as focused on the work as possible, without being colored by a personal relationship. But there are degrees, and we need to be able to make distinctions among them. At my own prior institution, the line was one about personal benefit: if potential external reviewers stand to gain directly in their own careers from a positive outcome in the review process — a dissertation director who becomes more highly esteemed the more highly placed his former advisees are; a co-author whose work gains greater visibility the more her partner’s career advances, and so forth — such reviewers should obviously not be engaged. But other levels of interaction should not disqualify reviewers, including co-participants in conference sessions, commenters on online projects, and so forth. We need to recognize that a key component of impact on a field is about those kinds of connection: we should want tenure candidates to be developing active relationships with other important members of their fields, to be working with them in a wide variety of ways. Such relationships should be disclosed in the review process, but they should not be used to eliminate the reviewers who might in fact be the best placed to assess the candidate’s work.

The key thing, again, is that the tenure review should be focused on assessing the impact that the candidate’s work is beginning to have on its field, and the confidence that impact to this point gives you about the importance of the work to come. Each aspect of the standards and processes that you bring to the tenure review process should be reconsidered in that light: are the measures you are using, the evaluators you are engaging, and the ways the work is being read or experienced producing the best possible way of assessing a career in process, and the most responsible way of considering its future?

I want to close with one crucial question that remains, however, and it’s a big one. This process of change is huge, and wide-ranging, and it strikes at the heart of academic values. Who will lead it? I do not know the situation at your institution well enough to say that this is definitively true here, but I will say that I know of institutions where administrative initiatives to reform processes like tenure and promotion reviews are met with faculty resistance to having standards imposed on them, and yet when faculty are tasked with beginning such reform they often disbelieve that the administration will listen to them. Which is to say that among the things that have to be done in this process is creating an atmosphere of trust and collaboration between faculty and administration, such that the work can — and will — be done together.

This is not an easy process. It’s not just a matter of changing a few phrases in the current guidelines to permit consideration of new kinds of work. But I firmly believe that a real investment in envisioning a new set of tenure standards and practices can have a transformative effect on our campuses, opening discussions about scholarly values, promoting innovations in both research and teaching, and supporting the new ways that scholars are connecting not just with one another but with the broader public as well. I look forward to hearing your thoughts about how such a process might go forward.

Disagreement

Tim McCormick posted an extremely interesting followup to my last post. If you haven’t read it, you should.

My comment on his post ran a bit out of control, and so I’m reproducing it here, in part so that I can continue thinking about this after tonight:

This is a great post, Tim. Here’s the thing, though: this is exactly the kind of public disagreement that I want the culture of online engagement to be able to foster; it is, as you point out, respectful, but it’s also serious. The problem is that I think this kind of dissensus is in danger as long as our mode of discourse falls so easily into snark, hostility, dismissiveness, and counterproductive incivility.

I don’t think it’s accidental that we are having this discussion via our blogs. I had time to sit with my post before I published it. You had time to read it and think about it before you responded. I’ve had time to consider this comment. And not just time — both of us have enough space to flesh out our thoughts. None of this means that by the end of the exchange we’re going to agree; in fact, I’m pretty sure we won’t. But it does mean that we’ve both given serious thought to the disagreement.

And this is what has me concerned about recent episodes on Twitter. Not that people disagree, but that there often isn’t enough room in either time or space for thought before responding, and thus that those responses so easily drift toward the most literally thoughtless. I’m not asking anybody not to say exactly what’s on their minds; by all means, do. I’m just asking that we all think about it a bit first.

And — if I could have anything — it would be for all of us to think about it not just from our own subject positions, but from the positions of the other people involved. This is where I get accused of wanting everybody to sit around the campfire and sing Kumbaya, which is simply not it at all. Disagree! But recognize that there is the slightest possibility that you (not you, Tim; that general “you” out there) could be wrong, and that the other person might well have a point.

So in fact, here’s a point of agreement between the two of us: you say that we need to have “the widest possible disagreements,” and that “to be other-engaged, and world-engaged, we need to be always leaning in to the uncomfortable.” Exactly! But to say that, as a corollary, we have to permit uncivil speech, public insult, and shaming — that anyone who resists this kind of behavior is just demanding that everyone agree — is to say that only the person who is the target of such speech needs to be uncomfortable, that the person who utters it has no responsibility for pausing to consider that other’s position. And there, I disagree quite strongly. (As does, I think, Postel; being liberal in what you accept from others has to be matched by being conservative in what you do for the network to be robust.)

I do not think that it should be the sole responsibility of the listener to tune out hostility, or that, as a Twitter respondent said last night, that it’s the responsibility of one who has been publicly shamed simply to decide not to feel that shame. There’s an edge of blaming the victim there that makes me profoundly uncomfortable. But I do think that we all need to do a far better job of listening to one another, and of taking one another seriously when we say that something’s just not okay. That, I think, is the real work that Ryan Cordell did in his fantastic blog post this morning. It’s way less important to me what the specific plan he’s developed for his future Tweeting is (though I think it’s awesome); it’s that he took the time to sit down with a person he’d hurt and find out what had happened from her perspective. It’s not at all incidental that they walked away from their conversation still disagreeing about the scholarly issues that set off their exchange — but with what sounds like a deeper respect for one another as colleagues.

This has all become a bit heavier than I want it to be. I have no interest in becoming the civility police. Twitter is fun, and funny, and irreverent, and playful, and I want it to stay that way. But I really resist the use of shame as a tool of either humor or criticism. Shame is corrosive to community. It shuts down discussion, rather than opening it up. And that’s my bottom line.

If You Can’t Say Anything Nice

Folks, we need to have a conversation. About Twitter. And generosity. And public shaming.

First let me note that I have been as guilty of what I’m about to describe as anyone. You get irritated by something — something someone said or didn’t say, something that doesn’t work the way you want it to — you toss off a quick complaint, and you link to the offender so that they see it. You’re in a hurry, you’ve only got so much space, and (if you’re being honest with yourself) you’re hoping that your followers will agree with your complaint, or find it funny, or that it will otherwise catch their attention enough to be RT’d.

I’ve done this, probably more times than I want to admit, without even thinking about it. But I’ve also been on the receiving end of this kind of public insult a few times, and I’m here to tell you, it sucks.

I am not going to suggest in what follows that there’s no room for critique, even on Twitter, and that we all ought to just join hands and express our wish for the ability to teach the world to sing. But I do want to argue that there is a significant difference between thoughtful public critique and thoughtless public shaming. And if we don’t know the difference, we — as a community of scholars working together online, whose goals are ostensibly trying to make the world a more thoughtful place — need to figure it out, and fast.

There are two problems working in confluence here, as far as I can tell. One is about technological affordances: Twitter’s odd mixture of intimacy and openness — the feeling that you’re talking to your friends when (usually, at least) anyone could be listening in — combined with the flippancy that often results from enforced, performative brevity too frequently produces a kind of critique that veers toward the snippy, the rude, the ad hominem.

The other problem is academia. As David Damrosch has pointed out in another context, “In anthropological terms, academia is more of a shame culture than a guilt culture.” Damrosch means to indicate that academics are more likely to respond to shame, or the suggestion that they are a bad person, than to guilt, or the indication that they have done a bad thing. And he’s not wrong: we all live with guilt — about blown deadlines or dropped promises — all the time, and so we eventually become a bit inured to it. But shame — being publicly shown up as having failed, in a way that makes evident that we are failures — gets our attention. That, as Damrosch notes, is something we’ll work to avoid.

And yet, it’s also something that we’re more than willing to dole out to one another. There’s a significant body of research out there — some of my favorite of it comes from Brené Brown — that demonstrates the profound damage that shame does not only to the individual but to all of the kinds of relationships that make up our culture. Not least among that damage is that, while a person who feels guilty often tries to avoid the behavior that produced the feeling, a person who feels shame too often responds by shaming others.

So, we’ve got on the one hand a technology that allows us, if we’re not mindful of how we’re using it, to lash out hastily — and publicly — at other people, for the amusement or derision of our followers, and on the other hand, a culture that too often encourages us to throw off whatever shame we feel by shaming others.

Frankly, I’ve grown a little tired of it. I’ve been withdrawing from Twitter a bit over the last several months, and it’s taken me a while to figure out that this is why. I am feeling frayed by the in-group snark, by the use of Twitter as a first line of often incredibly rude complaints about products or services, by the one-upsmanship and the put-downs. But on the other hand, I find myself missing all of the many positive aspects of the community there — the real generosity, the great sense of humor, the support, the engagement, the liveliness. Those are all way more predominant than the negative stuff, and yet the negative stuff has disproportionate impact, looming way larger than it should.

So what I’m hoping is to start a conversation about how we might maximize those positive aspects of Twitter, and move away from the shame culture that it’s gotten tied to. How can we begin to consider whether there are better means of addressing complaints than airing them in public? How can we develop modes of public critique that are rigorous and yet respectful? How can we remain aware that there are people on the other end of those @mentions who are deserving of the same kinds of treatment — and subject to the same kinds of pain — that we are?

“Neoliberal”

I have come to despise the term “neoliberal,” to the extent that I’d really like to see it stricken from academic vocabularies everywhere. It’s less that I have a problem with the actual critique that the term is meant to levy than with the utterly sloppy and nearly always casually derisive way in which the term is of late being thrown about. 1 “Neoliberal” is hardly ever used these days to point to instances of the elevation of market values above all others — it’s used to tar anything that has anything to do with any market realities whatsoever. Which, hello, United States, 2012. Welcome.

So to say, for instance, that the university-in-general is a neoliberal institution is to say precisely nothing. Name me one contemporary institution — seriously, an actual institution — that isn’t. Including every last one of us. None of us got to live in the places we live or study in the places we study or read on the freaking internet without market realities giving us the wherewithal to do so. 2

To say, on the other hand, that some universities are more beholden to market values than others — that some have made a value of the market, to the extent that they bear only the market in mind, and precious little else — and have therefore acquiesced all too willingly to the pressures of neoliberalism, actually might mean something. As it might to say that, for instance, having marketability as our only indicator of the value of scholarship or a scholar’s work represents a neoliberal corruption of the critical project in which we as scholars are ostensibly engaged. But that’s no longer how “neoliberal” is being used, at least in my hearing. It’s instead become a blanket term of dismissal, often aimed at institutions that do not have means of fixing the inequities by which we’re beset, inequities that are way larger than any university, even the university-in-general, can take on without serious support coming from somewhere.

So no more. “Neoliberal” is henceforth dead to me. I will take seriously no more casual statements that toss it around like popcorn, no further arguments that rely on it without any sense of specificity or grounding.

(And as for the tendency to associate anything that involves a computer automatically and of necessity with neoliberalism? Don’t even get me started. 3)

Moving On

I somewhat inadvertently made a big announcement via Twitter last night, and in so doing, as my friend Julie pointed out, sorta buried the lede. So here’s the story, a bit better presented:

Effective the end of this academic year, I’ve resigned my professorship at Pomona College.

This came out last night when @atrubek tweeted a query, seeking professors who’ve given up tenure. My friend @wynkenhimself mentioned me in response, but noted that she wasn’t sure if I was on leave or not. So I clarified.

In fact, I resigned a couple of weeks ago (the day Sandy hit, to be precise, but that’s another story), though I’d made the decision a while before. It wasn’t an easy choice to make — and it was even harder to communicate — but I’m convinced that it was the right one.

I was promoted to full professor at Pomona in Spring 2010, and so was pretty much set. I had amazing students, fabulous colleagues, a fantastic environment, and all the support I needed. I had a low teaching load in a dynamic, exciting department, and I was able to do roughly what I wanted within it. I could very easily have retired from Pomona, some years hence, and have been more or less perfectly happy.

But then this opportunity came along, to take the things I’d been imagining and help make them happen, to take the things I’d been trying on a small scale and test them out on a much, much larger one. That opportunity, however, required a bit of risk; risk somewhat carefully managed, to be sure — Pomona generously offered me a two-year leave of absence in order to test the waters — but risk nonetheless. I knew that at some moment, if I took this path, I was likely to have to consider what it would mean to give up tenure.

When the opportunity first floated my way about a year and a half ago, I found myself a bit overwhelmed by the implications of the choice. Tenure is, after all, the brass ring of academia, the thing so many of us put so much energy into getting — and I had it, right in my hand. Not just tenure, but a reasonably cushy, reasonably compensated full professorship at a top institution. And what I was being offered was extremely compelling, but it bore some significant risks. How do you even begin to ponder the pros-and-cons list that helps you make a decision like this?

I called a friend, who sat down with me and listened to my story. This was a friend who’d recently left the tenure track, for a very different set of reasons, but who nonetheless knew what it all meant. And I sketched out the possibilities for her, and asked where to start. What things should I be thinking about?

And she looked at me and said, “I don’t know. Do you want to change the world?”

And I think the choice was sealed, right there.

This is of course not to say that one can’t change the world from inside the protections of tenure. But I do think that those protections often encourage a certain kind of caution — certainly in the process of obtaining them, and frequently continuing long after — that works against the kinds of calculated risk that a chance like this requires.

So I’ve done the calculation, and I’ve taken the leap. And so far, it’s been absolutely exhilarating. Working with my wonderful colleagues at the MLA has been an important growth experience, and it continues to teach me new things. And it’s allowed me to focus on the aspects of my work that have always been about outreach. We’ve accomplished a lot here in the last year-plus, but there’s so much more left to be done; I’m thrilled to have the chance — and to be able to take the chance — to do it.

My somewhat subterranean announcement last night — or at least so I thought it — produced a pretty astonishing response, an overflow of cheers and congratulations rippling out from the friends who follow both me and @atrubek to others who saw those tweets and offered their good wishes, too. It was a lovely moment of confirmation that I’ve done the right thing, and a vote of confidence in a future filled with productive risk.

I will miss Pomona tremendously, and my many wonderful colleagues there. But I look forward to seeing what comes next — and honestly, not knowing may well be the best part.

Advice on Academic Blogging, Tweeting, Whatever

Over the weekend, something hashtagged as #twittergate was making the rounds among the tweeps. I haven’t dug into the full history (though Adeline storyfied it), but the debate has raised questions about a range of forms of conference reporting, and as a result, posts and columns both old and new exploring the risks and rewards of scholarly blogging have been making the rounds. Last night sometime, Adeline asked me what advice I have for junior faculty who get caught in conference blogging kerfuffles – which I take as standing in for a range of conflicts that can arise between those who are active users of various kinds of social media and those who are less familiar and less comfortable with the new modes of communicating.

This was far too big a question to take on in 140 characters, and I didn’t want to issue a knee-jerk response. I’m still piecing together my thoughts, so this post will no doubt evolve, either in the comments or in future posts. But here are a few initial thoughts:

1. Do not let dust-ups such as these stop you from blogging/tweeting/whatever. These modes of direct scholar-to-scholar communication are increasingly important, and if you’ve found community in them, you should work to maintain it. (And if you’re looking for better connections to the folks in your field or better visibility for your work and you aren’t using these channels, you should seriously consider them.)

2. Listen carefully to these debates, though, as they will tell you something important about your field and the folks in it. If there are folks on Twitter who are saying that they are less than comfortable with some of its uses, or if there are blog posts exploring the ups and downs of blogging, you might want to pay attention. There’s a lot to be learned from these points of tension in any community.

3. Use your blog/twitter/whatever professionally. This ought to be completely obvious, of course, but the key here is to really think through what professional use means in an academic context. In our more formal writing, we’re extremely careful to distinguish between our own arguments and the ideas of others — between our interpretation of what someone else has said and the conclusions that we go on to draw — and we have clear textual signals that mark those distinctions. Such distinctions can and should exist in social media as well: if you’re live-tweeting a presentation, you should attribute ideas to the speaker but simultaneously make clear that the tweets are your interpretation of what’s being said. The same for blogging. The point is that none of these channels are unmediated by human perspective. They’re not directly transmitting what the speaker is saying to a broader audience. And the possibilities for misunderstanding — is this something the speaker said, or your response to it? — are high. Bringing the same kind of scrupulousness to blogging and tweeting that we bring to formal writing is key. [Edited 12.55pm. Bad English professor!]

4. Make your tweets and blog posts your own. As I understand it, some of the concern about the tweeting and blogging of conference papers has to do with intellectual property concerns; does a blog post about a presentation undermine the claims of the speaker to the material? The answer is of course not, but if you want to avoid conflict around such IP issues, ensure that your posts focus on your carefully signalled responses to the talk, rather than on the text of the talk itself. This is the same mode in which we do all of our work — taking in and responding to the arguments of others — and it should be recognizable as such.

5. If somebody says they’d prefer not to be tweeted or blogged, respect that. Whatever your feelings about the value of openness — and openness ranks very high among my academic values — not everyone shares them. While I have a hard time imagining giving a talk that I didn’t wish more people could hear, I know there are other scholars who are less comfortable with the broadcast of in-process material. And while I might like to nudge them toward more openness, it’s neither my place nor is it worth the potential bad feeling to do so.

And finally:

6. Relax. People are going to freak out about the things they’re going to freak out about. If you’re working in a new field, or in alternative forms — if you’re really pushing at the boundaries of scholarly work in the ways that you should — somebody’s not going to like it. Always. The thing to do is to make your argument as professionally as you can, to demonstrate the value of the ways that you’re working — and then to get back to work. Doing your work well, and being able to show how your work is paying off, are the point.

That’s what I’ve got at the moment. What am I missing?

Two Things

One super-depressing (not least for how close to home it hits):

Imagine a small, developing country of perhaps 3 million people. Like many other small developing countries, our imaginary nation is rich in natural resources, its economy has prospered on the export of agricultural crops and benefited from the revenue generated by petroleum production, refining, and support services. Its history, like some of its counterparts in the developing world, reflects a constant structural economic weakness covered by a colorful culture, truly creative and charming people, and an often dramatic sequence of past events. Civil wars, civilian uprisings, and the failure to compete with more dynamic and successful nations have left our country with a small, wealthy, interbred, and interconnected elite, a growing entrepreneurial middle class, and a large much less prosperous population of rural residents and urban poor.

Riven by cultural conflicts generations old and struggling with an archaic political system, the country periodically falls into the hands of populist demagogues and petty tyrants. In between, often when prosperity strikes, the country’s significant group of responsible leaders seeks to enhance legal and institutional structures to improve its ability to attract and retain internationally competitive economic enterprises, but the periods of responsible leadership fade fast, and the nation reverts to a pattern of clientele government, backroom deals, and populist rhetoric.

Overall, its population remains significantly less educated relative to its peers in nearby nations, although a structure of incentives and subsidies supports good education for the children of the growing middle class and the political and economic elite. Other groups of citizens struggle through underfunded and inadequate schools, and those who survive often find themselves excluded from post-secondary opportunities by weak academic preparation and high cost….

One which gives me hope:

We believe in the power of the Internet to foster innovation, research, and education. Requiring the published results of taxpayer-funded research to be posted on the Internet in human and machine readable form would provide access to patients and caregivers, students and their teachers, researchers, entrepreneurs, and other taxpayers who paid for the research. Expanding access would speed the research process and increase the return on our investment in scientific research…

Go sign on to the latter. And if you live in the “small, developing country” of the former, speak up, and prevent the populist demagogues and petty tyrants from undoing the programs and services its people deserve.

Open Access at 10

I’m really happy (if mildly tired) to be writing from Budapest, where (like Cameron) I’m honored to participate in a meeting on the tenth anniversary of the Budapest Open Access Initiative. It was this gathering, ten years ago, that gave a name to the growing sense that the content produced as a result of scholarly research can and should circulate freely in the age of the Internet. We’ve come together to discuss what’s been learned over the last decade, as well as the directions for the next decade.

As I told someone yesterday, I’m simultaneously surprised that it’s already been ten years and that it’s only been ten years; the discussions that took place in Budapest a decade ago have had such an impact that it seems at one and the same time as if their ideas have always been in circulation and as if they have only just been introduced.

Needless to say, it’s auspicious that this anniversary meeting is taking place at a moment of widespread discussion about the value of public access to the products of scholarly research. Personally, I’m also thrilled that this discussion broadly recognizes the value of such open circulation of the products of humanities research as well as that of the sciences, and that there is serious consideration being given to the particular challenges that different subsets of the academy face in the transition toward more open models of communication.

I’ll hope to report more thoughts as the meeting progresses, and will look forward to bringing what I learn back with me, as we continue thinking through the future of scholarly communication in the humanities.