Wednesday, December 10, 2014

Showing Up for Practice

I've never really been involved in school sports but I imagine that if you don't show up for practice you risk losing your place on the team. You are expected to say no to things that are also perfectly valuable because you have prioritized the sport. At practice, meanwhile, you do more or less as you're told. Much of it is just physical conditioning (running). Some of it is more technical (making a shot). Some of it teaches you broader strategic competences (how to move on the court). While I'm sure there'll always be some grumbling and belly-aching, and while some coaches are wiser than others in what they put their athletes through, the basic rule is that you show up and do the work. If you don't like it, there are other teams, other sports, even altogether different pursuits.

I think we could vastly improve higher education by insisting that students show up for daily writing practice. For an hour a day, first thing in the morning, students would show up and complete a series of mandatory writing tasks under the "exam conditions" I described yesterday. Many of them would consist simply in writing the best possible paragraph they can in 27 minutes, perhaps given a key sentence by the teacher/coach. Sometimes they'd be given less time to rewrite a paragraph. Sometimes they'd be asked to write a paragraph reflecting on a quotation, perhaps specifically requiring them to quote it or, alternatively, to paraphrase it. They would show up, complete the tasks, submit them, and their work would be quickly checked by a teaching assistant. The teacher would spot-check (perhaps sometimes guided by a concerned TA) and intervene in the writing development of especially weak or especially strong writers, just as a coach on a sports team corrects people who are making mistakes, pushes people who are capable of more, and lets (I'm assuming again) most of the team, most of the time, just go through the motions, which are valuable precisely because they are "exercises". The motion itself develops your talent.

I imagine this idea can be criticized as either an infantilization or a militarization of higher education. In whatever sense this criticism might hit its mark, consider my suggestion a "modest proposal", i.e., a satire of the massification and corporatization of our universities. It's a way of taking seriously the idea that universities should "train" citizens for service to society. I don't deny that at a certain point (and a very extreme one that my proposal doesn't directly imply, I will insist) such training is merely indoctrination, a preparation for a life in servitude. The same critique can be made of sports teams and scout troops at all levels. Ideally, university students would cultivate their own exquisite solitude, requiring merely a gentle, mentoring hand from their teachers and a context for ongoing conversation (a classroom). They would not need to be forced to sit down and struggle with their writing. A university education would be reserved for people with an intrinsic desire for knowledge, and it would be of no use to people who lack the curiosity and drive required. But that is not the reality; universities have become an obligatory passage point for the pursuit of a wide variety of professions, not all of which actually demand "academic" skills, but all of which, for some reason, would prefer to employ people who have demonstrated a modicum of such competence.

So I'm not actually being ironic at all. In the early days of the universities students would sit in lecture halls and be read to by, yes, "readers" (lecturers) and their main job was to write down what was said. This is how books were made before the printing press. But it is also how a particular kind of mind, and a particular kind of mentality, was formed. It may, indeed, be how the peculiar inwardness of literary pleasure was originally invented. Maybe it's not for everyone. But surely there is nothing wrong with maintaining an institution that cultivates it? My proposal is just a way of introducing a bit of realism into the way we approach writing at universities. Surely, your performance as a student must demonstrate "academic" ability even if you have no desire to be a professor, just as your performance on the varsity basketball team must be "athletic" even if you have no long-term professional ambitions. Just as in sports, you'll have people coming out of this with a "merely" solid set of skills and their prose in "merely" good shape. But you are also providing a place for people of exceptional talent to excel, again just as in sports, eventually to pursue careers as professional writers, scholars, intellectuals.

P.S. I didn't do sports in school, but I was in the band for a while. Not only did we have band practice, we were also expected to practice at home. It's just so obvious in the case of music and sports! Why is it so hard to approach writing the same way?

Tuesday, December 09, 2014

Some Thoughts on Examination

If someone is learning how to play an instrument or how to draw, there is a straightforward way of testing them. You give them an object (some sheet music or a model) and ask them to represent it (to play it or draw it). The result may not be the most artistically interesting performance, but it will demonstrate a level of skill under the circumstances. You put something in front of them that you expect them to be able to represent (through a performance of the ability you've been trying to teach them) and then you watch them do it. Sending them home, and then letting them return with a finished drawing or a recording after, say, a week, would sort of miss the point. We now have to trust that it was in fact the student who produced the representation. And we wouldn't know under exactly what conditions it was produced in any case. There are too many ways of cheating if the process is kept out of sight.

I've been thinking about how this model might be applied in more bookish subjects. Wouldn't it be possible to examine the students' mastery of a sociological theory, or a historical period, or a literary corpus, by sitting them down in front of a computer for four hours with the task of writing, say, eight individual paragraphs, 27 minutes at a time? Or perhaps assign them only five paragraphs: the first half hour is spent planning out their essay; they then submit one paragraph every half hour; finally, they are given an hour to revise all five. They can be graded on both the individual paragraphs and the full composition, each of which shows something in particular.

By limiting the resources they can bring with them to the exam (a small set of paper books, for example) it would be very easy to detect patchwriting and plagiarism. Their essays could be automatically run through a plagiarism checker comparing them against exactly the books they were allowed to bring with them. This would allow us to make an important concession to proponents of patchwriting: it would now be possible to stop treating it as a "crime". Even plagiarism could be treated simply as poor scholarship. If you submit five paragraphs that are simply transcribed from the books you were allowed to bring with you, you don't get kicked out of school but you do get an F. Just as a pianist would if she didn't play the piece she had been assigned but openly played a CD of Glenn Gould's performance instead.
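To make the idea concrete: because the comparison corpus is just the handful of books the students were allowed to bring, the check can be very simple. Here is a minimal sketch, assuming plain-text versions of the permitted books are available; the n-gram length (eight words) and the flagging threshold are illustrative choices of mine, not features of any real plagiarism-detection product:

```python
# Minimal sketch: flag long word-for-word overlaps between a submitted
# essay and the small set of books the student was allowed to bring.
import re

def word_ngrams(text, n=8):
    """Return the set of n-word sequences in a text, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(essay, allowed_books, n=8):
    """Fraction of the essay's n-grams that appear verbatim in the allowed books."""
    essay_grams = word_ngrams(essay, n)
    if not essay_grams:
        return 0.0
    book_grams = set()
    for book in allowed_books:
        book_grams |= word_ngrams(book, n)
    return len(essay_grams & book_grams) / len(essay_grams)

book = "To be or not to be that is the question whether tis nobler in the mind"
copied = "He writes: to be or not to be that is the question whether tis nobler here."
original = "Hamlet hesitates because revenge would make him the mirror of Claudius."

print(overlap_ratio(copied, [book]) > 0.3)   # True: a long verbatim run is caught
print(overlap_ratio(original, [book]))       # 0.0: no eight-word overlap
```

Because the corpus is small and fixed, even this crude approach distinguishes transcription from paraphrase; a real implementation would also report *which* passages overlap, so the grader can judge whether the borrowing was marked as quotation.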

If this set-up were implemented, there would be absolutely no ambiguity about what they had learned to do during the semester. And it would be obvious to the students what they have to become good at. Now, you can give them all kinds of more "interesting" assignments throughout the year, and you can give them as much feedback on them as you like, including an indication of the sort of grade they might receive ... if it counted. But this will work best if you don't let course work during the term contribute to the grade. It's just practice, training. You tell them how well they're doing, but you only, finally, judge their performance at the end.

Let's construct an easy example. Imagine a one-semester course on three of Shakespeare's tragedies: Macbeth, Hamlet, and Othello. The students are to bring the text of each play, and the collection of essays (perhaps a casebook) that was assigned in the course. At the start of the exam they are given a recognisable question, perhaps not quite as familiar (from the lectures) as "Why didn't Hamlet kill Claudius immediately?" but something like that—a question that reveals ignorance if its relevance is not immediately apparent to them. It's the sort of question that after a semester of Shakespearean tragedy they should have a good answer to. Not something they're supposed to be able to come up with an answer to, but actually have one for going into the examination. For each play, in the context of its particular set of interpretations (in the casebook), there will be many different possible questions. The trick is that they don't know exactly what will be asked of them, nor of which play. All they can do to prepare is to understand the play and its interpretations.

And they can get their prose in shape. They know they will need to quickly and efficiently (in thirty minutes) plan out a five-paragraph essay. They will then have to compose five paragraphs in a row, a half hour at a time. (I've discussed the technical issues with the IT department at my university and it would be a simple matter to set up a computerised exam like this.) Then they'd have an hour to polish it. Students who are capable of such a performance have acquired not just valuable knowledge about Shakespeare's tragedies, but also a set of writing skills that will serve them (if they keep them in shape) for the rest of their lives.

And such assignments would be easy to grade. You would be able to determine at a glance what the students are capable of, and how well they understand the play. As, Bs, Cs, and Ds would be very easy to assign. Fs would result from radically incomplete or ignorant attempts, or, like I say, plagiarism. In four hours a student would have been able to provide a completely unambiguous demonstration of their understanding of the course material. And given only a few minutes per assignment (time could be saved by grading one out of the five paragraphs at random, plus the whole composition), a teacher would not only be able to painlessly complete the grading, but also get a good sense of how effective they are as teachers.

I'd love to hear what readers of this blog think of this idea. I really think this is how we should do things.

Monday, December 08, 2014

Writing for Publication vs. Writing as Inquiry

A recent comment to an older post asks a question I get very often and I think the answer is worth a post of its own. In "What to Do", I suggest a series of activities to keep you busy for 27 minutes, working on a single paragraph that says one well-defined thing (offering support for it or an elaboration of it). In the comments, Fides writes:

There's a big assumption in this - that you already know *exactly* what you know and what you want to say. Maybe in scientific disciplines that is the case... but that's not generalisable to *all* academic disciplines, in my experience. See for example Daniel Doherty's "writing as inquiry" - writing can also be a process of clarification. Your guidelines seem to assume that that process has already taken place - correct me if I'm wrong.

What Fides says is both entirely correct and a misunderstanding (a very common one, like I say) of what I'm suggesting. There is, of course, a kind of writing that constitutes inquiry. Scholars often find out what they really think about a subject by sitting down to write about it. Sometimes scholars conduct such inquiry very intentionally; they sit down with only a vague idea of what they're going to say and start "free writing" whatever comes into their head.

In addition to that kind of writing, however, there is a kind of writing that consists simply in writing down what you know. To practice (in both senses)* this kind of writing, you don't need to know exactly what you know, nor even exactly what you want to say; you just have to decide what you want to try for twenty-seven minutes to say in a single paragraph. I'm not saying there aren't any other kinds of writing. I'm drawing attention to a kind of writing that is, all too often, neglected, and which many writers would do well to work at a bit more deliberately. It is true that this kind of writing depends on the truth of the (second) assumption Fides asks about: that a "process [of clarification] has already taken place". But please grant that most of the knowledge you have has already passed through this process. Please grant that you are in possession of a great many justified, true beliefs in your area of expertise that are clear enough to you to write a single deliberate paragraph about if given twenty-seven minutes. It's the ability to write those paragraphs, not the inquiry that provides their content, that I'm talking about.

Now, sometimes the line between "writing for publication" and "writing as inquiry" is blurred. Notice, however, that it can be blurred either intentionally or in the act. Sometimes, we sit down to free write and are surprised by how easily we end up producing perfectly publishable prose. Here, I would argue that we merely become aware that the "process of clarification" has already happened, even if we somehow missed it. (It may have happened while we slept, or during a conversation the importance of which we hadn't noticed until now.) Sometimes, we sit down to work on an article and are frustrated by how difficult it is to say what we thought we had already understood. Here the process of clarification had been assumed, but mistakenly so, and we will have to go back and do some more thinking, reading, talking, etc. In both cases, however, we have a definite intention that defines what kind of writing we're trying to do. And we simply find ourselves doing a different kind of writing, by accident. The trick is to minimise the frequency of this sort of event. Don't valorise it as what all writing is all about.

Writing shouldn't always be an unpredictable adventure into the unknown. It will, unpredictably, be this some of the time; but to the extent that this happens, your writing process and research process become just that: unpredictable. By conflating "writing as inquiry" with "writing for publication" you are likely to undermine both processes. You are trying to accomplish with a file what should be done with a saw, or vice versa. This is true in all areas of inquiry. There is no academic discipline in which all writing is always also inquiry, though there are many scholars who have been made unhappy by thinking so.

*I.e., both in the sense of doing it in a regular, orderly fashion, and in the sense of doing it for the sake of improving your ability to do it.

Thursday, December 04, 2014


One thing that always makes me cringe when reading student papers is when they use my words, taking them from something I've written or something I've said in a lecture, but clearly don't understand what I meant, and so either turn it into a cliche or into nonsense. There's an implicit accusation in such "pastiche" (which, let's remember, is supposed to be a kind of homage), namely, "You told me to write like this! If you don't like it, blame yourself." This is true even at the formal level, of course, when a paper follows my guidelines to the letter and yet completely violates their spirit. The problem is that the student has tried to obey me, not understand me. Though I think it's rarely intended like this, it sometimes comes off as sarcasm.

In most cases, such students are doing what Rebecca Howard calls "patchwriting", something I've been talking about for a while now, even when I seem to be talking about something else. I'm trying to get clear about why I think it is wrong, and why I think it is poor advice, both for young writers and for their writing instructors, to allow it in the composition classroom. It should also be disallowed in the scholarly literature, of course, though I've come to see that this does not go without saying. I've heard people openly defend their borderline plagiarism as patchwriting.

I think that my disagreement with Howard is quite fundamental. It is about the very nature of language and writing. Inspired by Ludwig Wittgenstein and William Carlos Williams, I believe that language is the means by which we articulate our imagination, literally the means by which we join images to each other, the means by which we compose ourselves. In writing, this can be done very precisely, or rather, by writing we can make ourselves more precise, more articulate, even when speaking. "We make ourselves pictures of the facts," says Wittgenstein; that is, we imagine them. In Spring and All Williams correctly notes the profound importance of this process:

Sometimes I speak of imagination as a force, an electricity or a medium, a place. It is immaterial which: for whether it is the condition of a place or a dynamization its effect is the same: to free the world of fact from the impositions of 'art' ... and to liberate the man to act in whatever direction his disposition leads.

His friend, Ezra Pound, made a similar point in his ABC of Reading, though with a focus on the polity, not the individual:

Your legislator can't legislate for the public good, your commander can't command, your populace (if you be a democratic country) can't instruct its 'representatives', save by language.

In other words, language serves an important representational function, and what you want to be able to represent are the facts you know and the acts you master. You can't do this directly, however. Your language is not directly connected to either the facts of the world or the acts of history. What you represent in your words is, first of all, your imagination. You have to learn to speak your mind.

Patchwriters see language more in terms of performance than in terms of representation. Susan Blum has a very good way of putting this, albeit one that I find disturbing in its implications. We can approach our students either as "authentic selves" or as "performance selves". That is, we can ask them to represent their own ideas, the ideas they "own", however inchoate, in their speech and writing, or we can ask them merely to perform for us, to use language in appropriate ways under appropriate circumstances. I suppose there's a middle ground, which is actually closer to my view, where we ask students to perform their ability to articulate their ideas. But Howard and Blum seem to be asking us to accept that students aren't even trying to be themselves, and that, while they are no doubt as interested in "power" as people have always been, they don't desire the "authority" that we old-school traditionalists once believed is the legitimate basis of power. So instead of helping them to discipline their imaginations in a way that also frees them to act in accordance with their dispositions, Howard and Blum, and the composition instructors who find their approach compelling, are encouraging students merely to channel power through language, to speak as they are told to speak, not to say what they think.

Using language in this way will, I fear, change it beyond recognition. Let's call the result "patchese". It will be full of what Sarah Palin once called "verbage", of which James Wood rightly noted: "It would be hard to find a better example of the ... disdain for words than that remarkable term, so close to garbage, so far from language."

Wednesday, December 03, 2014

Ambivalence in Scholarship

[Update: Tim Vogus has responded to this post.]

As a writing coach, my job is to help people establish and maintain a process that reliably produces publishable prose, and the authors I work with are famously at risk of "perishing" professionally if they don't succeed. In that sense, I guess I'm helping them to manage a "high-reliability organization", namely, their own writing process, which is in an important sense an "existential" issue for them. I encourage them to face the problem resolutely and unsentimentally, to establish dependable, predictable routines, to reduce the complexity of the problem to a level that is manageable from day to day, and to ensure that they plan their tasks so that they complement each other. I have even been known to suggest that authors cultivate a kind of zen-like "mindfulness" about their writing.

I was therefore a bit disconcerted to read the recommendations of four sensemaking scholars—Vogus, Rothman, Sutcliffe and Weick (2014)—to the effect that mindfulness in high-reliability organizations depends on avoiding "routinization" and on "designing jobs in complex and contradictory ways" with the aim of fostering "emotional ambivalence". It seems like the opposite of what I recommend. So, either I am wrong about how to organize the writing process, or I am wrong to think of scholarship on the model of a high-reliability organization, or they are wrong about the value of emotional ambivalence. I lean towards the latter.

But in regard to the question of whether scholars can learn from hospital administrators or flight controllers or, say, wildland firefighters, it is important to note that this is not an analogy I've invented. In an influential paper from 1996, Karl Weick suggested that, since firefighters sometimes die because they "drop their tools" too late when running away from a fire, scholars should also unburden themselves of their rigor in order to remain agile in the face of a rapidly changing world. (I should note here that Weick sometimes reaches the opposite conclusion, saying how important it is to hang onto your tools in order to maintain a sense of purpose.) On the face of it, I also think universities should feel beholden to high standards of reliability, even if failure is less dramatic than in the case of a nuclear power plant or a taxiing 747.

Yesterday, I pointed out that many academics, consciously or not, justifiably or not, actually feel abused by their administrators, who might precisely be said to foster, or appear to foster, a high degree of emotional ambivalence in the faculty. I think Andrew Gelman put his finger on something important in the Whitaker plagiarism case at Arizona State University by emphasizing, in the ASU website's description of President Michael Crow, the conflict between "academic excellence" and "societal impact". (Andrew, however, didn't quite share my sense of the tension between these two values in general. See the comments.) In my view, we can openly acknowledge the trade-off, and this involves no ambivalence at all. What Vogus, Rothman, Sutcliffe and Weick might be suggesting in such cases is precisely what ASU is doing, namely, "equivocating" (another favored term in sensemaking scholarship). Instead of saying, "Yes, Whitaker's work is academically shoddy, but we're weighing this against his strong commitment to social issues," the administration's line seems to be that Whitaker, in some vague and unspecified sense, "combines the highest levels of academic excellence, inclusiveness to a broad demographic, and maximum societal impact."

On a practical, day-to-day level, Whitaker's own actions might have been motivated by his attempt to think, as Vogus et al. suggest, in a "prosocial" way in the context of a job that has been explicitly designed to be both "complex" and "contradictory". As a result, his work (like Weick's, not incidentally) has come to be marred by plagiarism. I really do hope that hospitals and fire departments don't swallow this message too uncritically, though I'm afraid there is some evidence that the value of ambivalence is touted in all kinds of contexts. If you ask me, this is not a very "mindful" way to do your writing. And there is, unfortunately, also evidence that the writing processes of sensemaking scholars are ambivalent in precisely this way, and that brings us back to the problem of "patchwriting", which I'll take up in the posts to come.

[Continues here with Tim Vogus' response]

Tuesday, December 02, 2014

High-Reliability Scholarship

[Update: Tim Vogus has responded to this post.]

Since science is a human activity, all human sciences are implicitly self-referential. We can ask a finance professor how her research is financed. We can ask an English professor how he gets his writing done. We can, famously, ask doctors to heal themselves. And we can ask organization theorists how they organize their work. Do the principles of organization that they come up with and pass on to their students also govern the work they do? And how does it work for them?

Consider the field of sensemaking scholarship. At a general level, we can ask whether [and how] our "interpretation" of organizational life itself exhibits the "seven properties of sensemaking" (Weick 1995). More specifically, we can recall that much sensemaking scholarship emerges from the study of so-called "high-reliability organizations" (HROs, see especially Weick and Sutcliffe 2006), and this can help us give a specific meaning to the question of whether our research is "reliable".

For example, Weick and Sutcliffe recently published (along with Vogus and Rothman) a piece that argues for the benefits of what they call "emotional ambivalence", which, they suggest, fosters increased "mindfulness". Here's a specific recommendation that gave me pause:

Designing jobs in complex and contradictory ways can create the tension fueling emotional ambivalence. ... Although these job designs hold promise for HROs, they also potentially benefit any organization wherever work is complex and operational reliability critical. (Vogus et al. 2014: 595)

Now, sensemaking scholarship is often praised for its "counterintuitive" insights. But this is a bit extreme to my mind. Do we really want to work in organisations led by people who intentionally design our task to be so complicated and contradictory that we experience emotional ambivalence? Even if the goal (mindfulness) could be reached like this, which I highly doubt, would that end justify the means?

Consider a recent piece by Philip Guo in Inside Higher Ed about "why academics feel overworked". His answer goes as follows:

I think the answer lies in the fact that, as an academic, your work comes from multiple independent sources. One claimed benefit of being a PI-level academic (e.g., a research scientist or tenure-track professor) is that you don't have a boss. However, without a boss to serve as a single centralized source of work, academics end up taking work requests from multiple independent sources that have no knowledge of one another.

A sensemaking scholar reading this might conclude that universities are well-designed "high-reliability organizations".

Suppose we learned that Wall Street has been explicitly following Weick's suggestion that "any old map will do" since the mid-nineteen eighties. (I've argued that this is possible.) That would shed light not just on modern finance, but also on contemporary organization theory. But now suppose we discover the field of sensemaking scholarship, too, has been following Weick's advice. For example, suppose that in the early 1980s it adopted the slogan "any old story will do" and sometime in the mid-1990s it went ahead and "dropped its tools". Or, as I've suggested here, suppose we discover that higher education has been organized specifically to foster and maintain a continuous state of "emotional ambivalence" by "designing jobs in complex and contradictory ways". Sort of makes you go "Hmmmm", right?

[Continues here]

Monday, December 01, 2014

How to Make a Picture of a Fact

One of my favourite books is a little manual by Oliver Senior called How to Draw Hands. He begins with a simple assumption, namely, that you always have a model, ahem, at hand. That is, if you want to learn how to overcome the "notorious difficulty" of drawing hands, there's nothing for you but looking at your hand and trying to draw it. Senior, whose own illustrations in the book amply demonstrate his mastery, can then help you along by getting you to notice particular aspects, and suggesting exercises for you to practice. He emphasizes, as I do in the case of writing, that you're not going to learn how to draw hands simply from reading his book. You are, precisely, going to have to practice. If you do, you're likely to improve.

Wittgenstein famously said that "we make ourselves pictures of the facts." I've been trying at this blog to increase his fame in this regard. I think it is a profound statement about what we do when we come to know things. After all, it is one thing to be able to recognize a hand when you see one and quite another to be able to draw one accurately. Indeed, as Senior notes, we're all willing to play along when an artist, having run into his limitations, gives us, at the end of what is obviously an arm, what looks more like a bent fork or a bunch of bananas for us to interpret as a hand. In a similar way, we are often inclined to "get" the meaning of a piece of writing even when it is not very competently written. We know what it is trying to say.

I think we can all agree that learning how to draw your own hand also teaches you how to draw anyone else's hand. If you spend, say, 30 minutes every day for a month drawing a fresh sketch of your hand in various positions, then you'll be in much better shape to draw a picture of mine than if you hadn't practiced. Your mind will be, as Senior puts it, "better informed" by your practice sketches of your own hand. Even if you've never before taken a very close look at my hand and even if mine happens to be in a position you've never seen your own hand in before.

I want to apply this insight to writing by suggesting, first, that you know a lot of facts. Each of these can provide you with a "model" to study. There are many different kinds of fact, of course. It may be a fact, for example, that Michel Foucault worked out a theory of neoliberal discourse. It is certainly a fact that the world economic system faced a financial crisis in 2008. And it is a fact that Bernie Madoff went to jail. It may be a fact that you have closely studied the coverage of Madoff's fall from grace. It may, finally, be a fact that the reception of Madoff's confession was shaped by the reigning neoliberal discourse of public risk and personal responsibility. Your analysis may have shown this (I'd like to see that analysis, actually). All these facts may be known to you in a detailed way. Some of them may be less known or altogether unknown to your peers. In any case, you'll want to be able to write them down.

You want to be able to make "pictures" of such facts. Consider, again, a picture of a hand. There's a difference between merely recognizing a drawing as a hand and learning something interesting about the hand from the picture. Is it the hand of a child? Is it injured? Is this fist clenched in anger? Are these fingers holding something fragile? Are these two hands engaged in a handshake or an arm wrestle? It's one thing to get four fingers, a palm, and a thumb down on the page. It's quite another to indicate the wear and tear of a long life or some temporary dirt under a fingernail. People spend a lifetime perfecting their ability to draw such things. As Senior says, the problem is that of representation within the limits set by the two-dimensional surface of the paper.

Consider, then, the terms of the problem posed by a paragraph. Given at least six sentences and at most 200 words, how will you represent the fact that Bernie Madoff went to jail? How will you represent what he did? How he got caught? How many paragraphs do you need to explain how the press covered the case? How many facts are involved? Remember that each paragraph will state a series of facts (usually at least six) but these will add up to one larger fact (stated in the key sentence). Literary pleasure is all about passing from a sequence of words on the page to a clear image in the mind. It does not have to be a visual image. It just has to be an arrangement of things into facts. A paragraph is a picture of a fact.
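The formal terms of that problem (at least six sentences, at most 200 words) are mechanical enough that they could be checked automatically, say, in the computerised exam imagined above. Here is a rough sketch of such a check; splitting sentences on terminal punctuation is a crude approximation I am assuming for illustration, not a serious piece of language processing:

```python
# Rough check of the formal constraints on a paragraph:
# at least six sentences, at most 200 words.
import re

def check_paragraph(text, min_sentences=6, max_words=200):
    """Return (ok, sentence_count, word_count) for a paragraph."""
    # Crudely split on runs of terminal punctuation; ignore empty pieces.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\S+", text)
    ok = len(sentences) >= min_sentences and len(words) <= max_words
    return ok, len(sentences), len(words)

paragraph = (
    "Madoff went to jail. The fraud was vast. Investors were ruined. "
    "The press was merciless. Regulators were embarrassed. The verdict "
    "seemed inevitable."
)
print(check_paragraph(paragraph))  # (True, 6, 22)
```

The point of the check is not to grade the writing, of course, but only to enforce the terms of the problem, the way a chess clock enforces the terms of a game without judging the moves.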

Friday, November 28, 2014

Where Is Your Knowledge?

If you know something, I always say, you can compose a coherent prose paragraph about it in 27 minutes. This sometimes starts a discussion about what you are allowed to bring with you into that 27-minute session, and sometimes a discussion about what sort of preparation is required. These two issues are of course related.

Let's start with preparation. On my rules, you are not allowed to do any "preparation" for your planned writing sessions. That may sound odd, so let me explain. A "planned" writing session is one that you have decided on at the latest the day before. At the end of the working day, you have decided when, where and what you will write (also, by implication, why and for whom you will write). At 5:00 pm today, for example, you might decide to write a paragraph between 8:00 and 8:27 am on Monday, in your office, that will say that Virginia Woolf thought novels communicate "the loneliness which is the truth about things". (Or, merely inspired by Woolf, and unsure what she herself believed, you may have decided to write a paragraph that says that novels communicate what Virginia Woolf called "the loneliness which is the truth about things". Do note the difference between those two tasks.) You have thereby chosen to write down something you know, first thing Monday morning. You have not decided to learn something by Monday morning. You have decided that you already know it, that you are, in that sense, already prepared.

It's like deciding you'll go for a 5k run on Monday. You're not going to spend the weekend getting into shape for it. If that is necessary, you should decide on a shorter run. Seems simple and obvious in the case of jogging, but it needs to be said in the case of writing.

What, then, does it mean to know something at the end of the day on Friday well enough to be able to write comfortably about it in your office on Monday morning? Notice that the place you will be sitting is part of the decision to write. In this case, you are predicting that you will know what novels do (or what Woolf thought novels do) in your office on Monday morning. What difference could the location make? Well, you may have a number of books in that office. I encourage you not to open them, however, unless you've clearly marked the pages you will be needing, so as not to spend most of the 27 minutes searching through them or even succumbing to the temptation to actually read them. More usefully, your office contains the notes you have from your reading, and you can select the relevant pages from your notes, and lay them out beside your computer (or whatever you write on) before you leave the office for the weekend. With those notes at hand, then, you will know what you are talking about come Monday.

What this shows is that knowledge is not something you have in your head. Knowing something is a relationship you establish between, on the one hand, your memory, your habits, your imagination, even your hands, and, on the other, your notes, your books, your university's library, your data, and the vast complexity of the real world that they represent. When you know something you may not be able to quote your source verbatim, but you know exactly where to find it. (These days, of course, you may know only what search terms you can plug into Google to lead you directly to the source. There's something unseemly about this to me, but that may just be an indication of my age.) There is certainly a component of your knowledge in your head, indeed in your whole body (and it remains important to test our students for the presence of this component), but it does not suffice without the network of support that knowing something implies.

Still, my test remains those 27 minutes. If you can't decide in advance to write something down, and arrange a set of circumstances under which such writing can reliably happen, then you simply don't "know" what you are talking about. You may be very close. You may almost have learned it. But until you know how to set up a situation that lets you compose a coherent prose paragraph of at least six sentences and at most 200 words in 27 minutes, you have not reached that particular state of competence scholars valorize as "knowing". Keep at it. And later today, just choose something else to write about when you get in on Monday morning.

Wednesday, November 26, 2014

Normal Distributions

I think it's Albert Camus who said that we often underestimate the effort people make to be normal. Though I can't be sure this is what Camus himself intended, I think it's unwise to let the truth of this statement lead us to abandon "normativity" in the sense in which this notion is used in identity studies. I suppose I risk being called conservative, and will no doubt be asked to "check my privilege", but Andrew Gelman's post on grade inflation got me thinking about the impossible burden of identity work in a world without norms. Let's leave aside the important issues around race, gender, class, and sexual orientation, and consider just the question of academic achievement.

One of the things you're supposed to discover about yourself at university is whether or not you're inclined towards research and, of course, whether you have an aptitude for it. Obviously, not everyone is cut out for a professorship, and that's no shame on anyone. People go through years and years of schooling and then, at some point, many of them leave school to go into business, or politics, or entertainment, or gainful unemployment. It makes sense to have "elite" schools, like Princeton, where exceptionally high-achieving high-school students go to get a(n even) higher education. But once there, it would be really surprising if all of them turned out to have the intelligence and curiosity to impress "academically". It also makes sense to have less elite universities, where people who didn't do quite as well in high school can go and, again, try to impress their academic teachers. This creates a career path for straight-A high school students through an Ivy League BA, to, say, a top law school and into the legal profession, but also a path for a B-student in high school, through a less selective state university, a master's degree somewhat higher up the ladder and, finally, a PhD at Princeton. That's because what it takes to succeed in academia isn't exactly the same thing as what it takes to succeed in high school. You've probably seen my point coming: different norms apply.

I'm focusing on academic outcomes here, but they are of course affected by extracurricular distractions. The important thing is to have a system that actually registers the students' relative success at meeting the specifically academic standard at a particular point of their life path. At some point, the student runs into a limitation. Having received easy As in math all her life, she suddenly finds herself getting Bs in advanced statistics. This should not be a tragedy for her; she's just learning what she's good at. Having struggled for his Cs in high school English, he suddenly discovers he's able to earn As in philosophy. This isn't an indictment of high-school English. It's just, again, an exposure to a different set of norms.

What about the curve? I don't think there's anything wrong with the idea of meaningfully graduating at the "top of your class", i.e., of letting academic achievement be relative to your cohort, not some Platonic ideal grasp of a subject matter. And most people in most classes really should be satisfied with the Bs and Cs that are available to them after all the well-deserved As have been given out to people with abnormal intelligence or curiosity, and the well-deserved Ds and Fs have been assigned to those who need to find other things to do (or learn to show up to the courses they have enrolled in).

My point is that there are enough different kinds of "normal" out there for everyone to be normal in some ways, exceptional in others. By refusing to articulate clear, even pedantically clear, standards for "academic" work in higher education out of a "respect for difference", i.e., by refusing to mark out a space of perfectly respectable "normal" achievement (Bs and Cs), as well as a range of high and low achievement (As and Ds), we are robbing students of the opportunity to find out exactly where and how they are normal. Sure, some will still make the tragic effort to be normal (or brilliant) in an area they are simply not normal (or brilliant) in. They may be trying to impress their parents, for example, or embarrass them. The truly sad cases are of course those who pretend to be average where they are really brilliant.

Camus' insight is important, finally, because any effort we make risks being wasted. There should be vast regions of normalcy out there that most people, in most of their activities, can enjoy effortlessly. Being yourself should by and large ... on the whole and in the long run, on average, however you want to put it ... be easy. Our opposition to normalcy is really a demand for uniqueness. We are asking everyone to be unique in every way. And we then ask our already beleaguered faculty to grade these singularities by way of an assessment of the "whole person". Can't we see how impossible we're making things for ourselves? Just assign those damn 5-paragraph essays. Tell the students there are such things as good and bad writing, greater or lesser ignorance. Then spend the five or ten minutes per paper it will take to distribute their efforts under a normal curve. These "whole people" will be fine knowing only how well they did relative to each other in the art of composing themselves into five, more or less coherent, more or less intelligent, more or less knowledgeable, paragraphs.

Monday, November 24, 2014

Originality, Plagiarism and Pierre Menard, Part 2

Jonathan sets me straight. At least partly. On my reading, Pierre Menard neither "re-wrote Don Quixote without ever having read it" nor "transcribed" it through some unknown process that rendered it an "original" composition of his own. Both ideas are belied by Borges' text. "When I was twelve or thirteen years old I read it," writes Menard in his letter to the narrator, "perhaps in its entirety. Since then I've reread several chapters attentively." Our narrator also tells us that Menard's "aim was never to produce a mechanical transcription of the original; he did not propose to copy it." Jonathan at one point suggests Menard "reproduces or 'transcribes' it through an unexplained science-fictiony device" or alternatively (and I think more plausibly) "memorize[s] sections of it and then sit down to write, but never writing down something unless he felt it as his own". Jonathan emphasises that honesty is the key to this, since in one sense what he is doing is in fact transcribing: he is "writing across" from one text to his own. It's only when he has actually appropriated the words, so that they are no longer Cervantes' but his own, that his project has succeeded. The standards by which one can evaluate this process are of course unknown.

I'm still not convinced this is exactly what Borges, Menard or the fictional literary critic had in mind. I'm entirely willing to play at being "more Borgesian than Borges" as Jonathan suggests, of course. But I need to square my understanding of the text with, especially, this description of Menard's process, provided in that same letter to the narrator:

My [Menard's] general memory of Don Quixote [from his reading], simplified by forgetfulness and indifference, is much the same as the imprecise, anterior image of a book not yet written. Once this image (which no one can deny me in good faith) has been postulated, my problems are undeniably considerably more difficult than those which Cervantes faced. My affable precursor did not refuse the collaboration of fate; he went along composing his immortal work a little a la diable, swept along by inertias of language and invention. I have contracted the mysterious duty of reconstructing literally his spontaneous work. My solitary game is governed by two polar laws. The first permits me to attempt variants of a formal and psychological nature; the second obliges me to sacrifice them to the 'original' text and irrefutably to rationalize this annihilation.

Here the suggestion is that he'll work with his memory of the story, not his memory of the text, which he insists is as imperfect as a novelist's image of a book he's not yet written. It's out of that imaginary that he will attempt to produce a text that is identical to Cervantes'. The claim is that he succeeded in writing two chapters and part of another.

I'm being pedantic mainly for the sake of making this clear to myself. And also because something Jonathan said reminded me of another remark of Borges' in his "Note on (towards) Bernard Shaw". "The heroic aspect of the feat," says Jonathan, "[is] bridging the distance between the two sensibilities without ever cheating. The exact mechanism ... is deliberately obscure since what matters is the negotiation between the two subjectivities." In his "Note", Borges dismisses a series of literary "devices"—Lully's metaphysical discs, Mill's worry about the exhaustion of the possibilities of music, Lasswitz's "total library" (which Borges successfully made his own)—because they turned the problem into "a kind of play with combinations". I think Susan Blum's "folk anthropologists" are in the same category. "Those who practice this game," says Borges, "forget that a book is more than a verbal structure or series of verbal structures; it is the dialog it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory." I think we have to remember that Menard was not trying to do something like those patchwriters who want to know the minimum number of changes you have to make to a text to turn it into a paraphrase. He was, as Jonathan says, attempting a "negotiation between two subjectivities" in the most difficult terrain imaginable, i.e., in the mental space that differentiates the meaning of two identical texts.

Sunday, November 23, 2014

Originality, Plagiarism and Pierre Menard

A recent post of Jonathan Mayhew's reminded me of an old complaint I have about the blurbs on my Penguin paperbacks. My 1981 King Penguin edition of Borges' Labyrinths describes Pierre Menard as "the man who re-wrote Don Quixote word for word without ever reading the original" on the back cover. (This sort of thing happens a lot, I've found. I wonder if it's a convention I've never been told about. Perhaps blurbs are supposed to be misleading so as not to ruin the plot?) In any case, my reading of "Pierre Menard" doesn't have him doing any "transcribing", as Jonathan seems to say. In fact, I thought the opposite was true.

Pierre Menard, as I read Borges, was trying to write Cervantes' Don Quixote without plagiarizing it. The task seems to be an impossible one; indeed, it seems absurd. Menard intends to write the exact same words as Cervantes, but he, Menard, is now to be their author. As Borges's fictional literary critic points out, the words will be the same, but their meaning will be entirely different. Menard wanted to, literally, write Don Quixote.

How can you become the author of a book that has already been written? We can imagine a parallel universe in which, as in ours, Cervantes writes the Quixote in the early seventeenth century but, unlike ours, does not publish it, and does not achieve the fame he enjoys here. Then, four hundred years later, Menard discovers the manuscript and publishes it as an original creation of his own mind. This would of course still make him a plagiarist, but it would be very difficult to discover (if he kept his own secret). Menard would now become the author, and, if he really did present it as something he had just written, his words would be interpreted as those of a contemporary.

Though it is hugely unlikely, we could also imagine another universe in which Menard, in a true coincidence, produces a work that is identical to Cervantes' unpublished manuscript, exactly as Penguin's blurb writer suggests. In this parallel universe, then, two people write the same manuscript independently; each text springs from ("originates" in) the imagination of its own unique author. This, interestingly enough, is the sort of "impossible originality" that I've argued we demand of students. We want them to "come up with" ideas that are in most cases already available in the published literature they just haven't read yet.

But these are not the universes that Borges would have us imagine. Menard desires a universe in which Cervantes wrote and published Don Quixote and in which Menard, fully aware of Cervantes' achievement, could also write and publish the same sequence of words, but in his own name, and, like I say, without plagiarizing them. As Borges and Menard are aware, this requires Menard to forget Cervantes' version. The odds against Menard's project are formidable*: the odds of writing the Quixote without plagiarizing it are exactly the odds of writing an exact copy of any book that one has never read. In our parallel universe we need only posit that Menard does not actually discover Cervantes' manuscript. Rather, someone else discovers it after Menard has become famous (if writing an original Quixote in 1905–1935 warrants literary fame). I suppose there would be a scandal. No one would believe Menard had not transcribed Cervantes.

And that's what happens when we find that a student who has, as expected, submitted an "unoriginal" idea in an essay, has also, as expected not to, used the exact same words as, either another student, or an academic blogger, or published scholar. We would not be entirely surprised to find a sophomore English major propose that Nick Carraway was gay. But we would raise an eyebrow if the student wrote "It’s a testament to Fitzgerald’s talent as a novelist that he was able to provide so much textual evidence that Nick is gay without confirming it or drawing undue attention to it. Subtlety is an art." Here a set of quotation marks and a reference to Greg Olear, not to mention an ellipsis, would, of course, be expected.

*Perhaps this is why Andrew Gelman is so passionate about plagiarism. The excuses are so often an affront to probability theory.

Wednesday, November 19, 2014

A Revision of Solitude?

According to Susan Blum, academia is beholden to an "eighteenth century model of the self and author [which assumes] a singularity and essence that [is] fixed, unchanging and in some ways isolated (unaffected by others’ influence)." But she and her fellow anthropologists have been questioning these assumptions, noting that recent technological developments render them obsolete and should have us rethinking our basic approach to higher education. Shaped by social media, "our students have become folk anthropologists, speaking out about the impossibility of singularity, the shared quality of discourse, the reality of fragments of texts incorporated into every utterance (or written document) and the collective nature of cultural creation." As Jonathan Mayhew has pointed out, this sort of thing has become pretty orthodox in the social sciences, travelling under the banner of "postmodernism". He's not exactly impressed.

As I was reading Jonathan's post, a remark about Rosmarie Waldrop's use of the "I" in her introduction to Curves to the Apple came to mind. "This 'I'," she says, "has lately been confused with the expression of unquestioned subjectivity and identity. But it simply indicates that language is taking place." She doesn't say who "has lately been confused", but it may well be those anthropologists and their students, who think that demanding "originality" of authors is tantamount to requiring them to be "geniuses". Now, Waldrop is a poet and her remarks resonate nicely with those of another poet, Tony Tost. He also doesn't say exactly who he has in mind, but he seems to be correcting a common misconception when he says, "One is not condemned to a perpetual present, nor to the immediacy of seemingly random, unconnected signifiers. In summary, one is here because one has remembered to be here. In conversation, one discusses what rises" (Invisible Bride, p. 46). There's something distinctly postmodern about the "immediacy" he rejects. But, like Waldrop, he suggests that we should just keep talking. Perhaps it's just language.

Allen Grossman, a poet of Waldrop's generation (b. 1932) who recently died, also seems to hold an "eighteenth century" notion of the "in some ways isolated" self. Explicitly so, in fact: he invokes Descartes, the godfather of the "isolated subject". In his postscript to Descartes' Loneliness he tells us that "We, each one of us alone, think in our solitude about our own mind and about the world, in language—and each finds out thought about the self, about other persons and their claims upon the self, speaking and answering by means of language." There it is again—language. Grossman, if I recall, is one of Tost's influences, and perhaps we see a bit of it on the same page I already quoted: "Talking becomes a conscious stammering not in one's language, but in how one thinks," he writes; "a conversation represents not so much a break with solitude, but a newer form of solitude, a revision of the logic of solitude."

I became aware of Tost's work back in 2003, when I read a poem that, interestingly enough, was made by patching together materials found on the Internet by searching for variations on the phrase that constitutes the title, "I Am Not the Pilot". It had a profoundly liberating effect on me. The poet, as I've noted elsewhere, is rejecting the sort of "competence" that is demanded of him, and is performing that rejection precisely by plagiarising every word of the poem. (This "Google sculpting" has since become the hallmark of so-called "Flarf" poetry.) I have never held this against him. He remains my favourite living poet.

I'll continue this soon. There's an obvious tension here between the poetic sense of self and language and the anthropological one. At the same time, Tost's "I Am Not the Pilot" is perhaps a sign of a "revision of the logic of solitude", the logic that is characterized by Grossman as "Descartes' loneliness". That revision may be, as Blum suggests, driven by technology. That doesn't mean that we are, to use Tost's word, "condemned" to lose ourselves. What was it William Carlos Williams said? "When I am alone I am happy."

[Update: I just googled his phrasing to find his source. It can be found on page 93 in Pamela Odih and David Knights' "Just in time?", in the anthology Knowledge, Space, Economy, edited by John Bryson, Peter Daniels, Nick Henry, and Jane Pollard (Taylor & Francis, 2000). It turns out Tony has patch-written this line! This is not surprising given what we know about "I Am Not the Pilot".]

Monday, November 17, 2014

"Originality is Impossible"

One of the most interesting professional tensions that I experience in my work as a coach is the resistance of anthropologists to my ideas about the writing process. So I guess I shouldn't be surprised to find myself in a disagreement with an anthropologist about the nature of authorship itself.

Susan Blum has provided a useful summary of her distinction between the "authentic self" and the "performance self" in the American Anthropological Association's Anthropology News (March 2008). Two things stand out for me. First, she casts the "anthropological" notion of self as a foil for the traditional "academic" sense of self. That is, she suggests that there is a tension between what professional anthropologists know to be true about the self and what academics in general presume about it. Second, and more worryingly, she believes that students, unlike their university teachers, are in possession of this anthropological truth about themselves. That is, the students, qua "folk anthropologists", are right.

In defending the "academic", "authentic" self let me begin with what I think is a common misconception among patchwriters about originality. Here's one of Blum's subjects, i.e., a student she talked to during her fieldwork:

Ideas are gonna get recycled. There’s no way that a hundred kids in a class could write papers with all fresh ideas. That’d be a hell of a class if you could. In fact, I’d be willing to say that no one—not even one student—will come up with something that’s never been come up with before. And that’s not an indictment of them, it’s just these ideas are all over the place.

Now, academics know this as well as any student. When teachers ask students to submit "original" work, they are not asking them to "come up with something that’s never been come up with before", they are merely asking them to submit for evaluation their own ideas, i.e., ideas that, whether actually "original" or not (in the hyperbolic sense invoked by the student), are ones they actually "came up with". They will have arrived at these ideas on the basis of their reading, and it's therefore important for the student to properly reference the reading they have done, leading up to the part that they came up with themselves, so that the teacher can assess their abilities and give them a grade. If they pass off some part of their reading as their own ideas, they are plagiarizing, cheating. They are pretending they came up with something themselves that they just read in a book. But the fact that the teacher already knows what the student has "discovered" is not in and of itself a strike against the student. It's only a problem if the student hides the source.

Originality in the sense of something "new under the sun" is of course very rare. But it is possible to distinguish clearly between what you have learned from reading and what you have thought out yourself. This is very important in school, where almost all of what you learn is already known to others. But it remains important in a research career, where "originality", in the strong sense of making that rare "novel" contribution, depends on knowing what is already known.

Wednesday, November 12, 2014

Against Patchwriting

I've decided to confront the issue head-on, if only for the sake of clarity. So I'll just announce straight off that I am against patchwriting. I use that term in the sense coined by Rebecca Moore Howard: "copying from a source text and deleting some words, altering grammatical structures, or plugging in one synonym for another" (Howard 1999: p. xviii). And when I say I'm against it I mean that I refuse to "celebrate" it as some writing instructors do:

Describing the textual strategies of Tanya, a student who in traditional pedagogy might be labeled "remedial," Glynda Hull and Mike Rose celebrate her patchwriting as a valuable stage toward becoming an authoritative academic writer: "we depend upon membership in a community for our language, our voices, our very arguments. We forget that we, like Tanya, continually appropriate each other's language to establish group membership, to grow, and to define ourselves in new ways, and that such appropriation is a fundamental part of language use, even as the appearance of our texts belies it" (152).

These and other studies describe patchwriting as a pedagogical opportunity, not a juridical problem. They recommend that teachers treat it as an important transitional strategy in the student's progress toward membership in a discourse community. To treat it negatively, as a "problem" to be "cured" or punished, would be to undermine its positive intellectual value, thereby obstructing rather than facilitating the learning process. (Howard 1995: 788-9)

I believe, in short, that patchwriting is a problem that should be addressed, even a disease that should be "cured", and in some cases a crime that should be "punished". Though I don't think it really is a "punishment", one simple technique here is to ensure that patchwritten work receives a lower grade. But this is where the "criminal element" comes in, because, like classical plagiarism, it is often not immediately apparent on the surface of the text. The first problem with patchwriting, like other kinds of plagiarism, is that it must be detected. Patchwriting conceals the relationship between one's own writing and the writing of others, and that alone should dampen any possible "celebration" of the student's accomplishment in this art.

The toleration—and encouragement, if that's what "celebrating" can be taken to imply—seems to be founded on a fundamental misunderstanding about scholarly writing, which is clearly on display in the passage I've quoted. It is simply not true that "we forget that we ... continually appropriate each other's language to establish group membership". Good scholars are constantly mindful of these acts of appropriation and therefore continually acknowledge their sources. There are acceptable ways of appropriating the work of others, namely, through paraphrase and quotation, always with adequate citation. There is no mystery (though there are of course a few subtleties) about how this is done, nor when it is done right.

I'll be writing about this in the weeks to come, mainly as a way of reflecting on the work of Rebecca Howard and Susan Blum, both of whom I've written about before. Like I say, I'm going to be taking a hard line on this, mainly in the interest of being clear. Let there be no doubt that I think patchwriting is a problem, and one we need to do something about. It is no more "an important transitional strategy" toward mastery of scholarly writing than any other form of plagiarism, nor does it have "positive intellectual value". True, like plagiarism in general, it does offer a "pedagogical opportunity", or what we also sometimes call a "teachable moment", but only in the sense that it provides an occasion to talk about intellectual honesty. Patchwriters are faking their linguistic competence, and they must be told that that is what they are doing, and that that is the opinion competent scholars form of them when they discover the real source of their language.

It's not, I should add, just a problem among students.

Update: it's not a coincidence that I'm returning to this subject today. Andrew Gelman had warned us that a post about this was "on deck" today. And sure enough: here it is.

Friday, November 07, 2014

What are the implications of a theory paper?

Two years ago, thinking myself wittily obvious, I said that theory papers "accomplish their theoretical aims by purely theoretical means". Yesterday, talking to a PhD student about her theory paper, I found myself saying, perhaps, the opposite. Theory papers, I said, do not have theoretical implications; only empirical papers can truly have "implications for theory". Just because you've thought about something, I said, your peers don't necessarily have to change their minds. That would require some actual, empirical results—a tested theory.

Now, in one sense, that's not really true, of course. When you write a theory paper, you are actually trying to affect the minds of your readers. You're trying to get them to see the world differently, to expect different things to appear under particular circumstances. Rather than showing them such things under such circumstances, as you would in an empirical paper, you confront them with aspects of the available literature that they are unfamiliar with or, perhaps, have just forgotten about. Once those writings, or your particular reading of them, is presented to them, you presume, they will come to expect familiar objects to behave in hitherto unthought-of ways.

If you write your theory paper very convincingly you can accomplish this goal—of changing someone's expectations about an object of inquiry—without any new empirical evidence. At the very least, you can shake the reader's confidence in currently held assumptions about how the object behaves in practice. So was I simply misleading that PhD student when I said a theory paper doesn't have theoretical implications?

Not quite. I was making a formal point about the rhetoric of theory papers. The section that corresponds to the "implications" section of an empirical paper has a particular rhetorical goal, namely, to make explicit what "follows" (logically, rationally) from the rest of the paper. Since the whole paper is about theory, the "analysis" will already have established how the theory must change. It will not just have provided premises from which to draw "theoretical" conclusions; it will have presented a complete theoretical argument, conclusions and all, just as an empirical paper will draw empirical conclusions already in the analysis (or "results" section), from which (again, in the empirical paper) either "practical" or "theoretical" implications will then follow.

Just as the implications of an empirical paper reach beyond the empirical material itself (into theory and/or practice), so too must the implications of a theory paper reach beyond the purely theoretical arguments the paper makes. As I said two years ago, and again two days ago, these implications will often be methodological. That is, if you convince your reader to expect something different of the object of their research, this will, probably, have consequences for how they do that research. If you convince them to see the world differently, they'll probably begin to do things differently. Minimally, it suggests doing a study to find out if you're right.

A theory paper may also have "meta-theoretical" implications, or what can properly be called epistemological implications. That is, a reflection upon theory qua theory may lead us to rethink what knowledge is or at least what kind of knowledge we produce. Thus, the choice between "theoretical" and "practical" implications in an empirical paper is transformed into a choice between "epistemological" and "methodological" implications in a theory paper. (Imagine the permutations for a methods paper!)

To sum up then: a theory paper does make a theoretical contribution but it does not, formally speaking, have theoretical implications.

Wednesday, November 05, 2014

Theoretical and Conceptual Papers

I originally proposed my forty-paragraph outline as a guide for the writing of what I call "the standard social science paper". This is the kind of paper that presents the result of an empirical study, framed by a familiar theory, guided by an accepted methodology, and with definite implications for theory or practice. I was recently asked about theoretical papers and, since I get this question often, I was sure that I could just point to a post on this blog that answered it. It wasn't quite as easy as I thought (though there is this post), and I thought the best solution would be to just write a fresh post on the subject.

What I will be offering here is not a normative guideline for what a theory paper should accomplish, of course. I'll leave that to the major theorists, especially those who serve as the editors of the journals that publish such papers. Instead, I will propose a way of organizing twenty hours' work such that, at the end of it, you have produced the first draft of a 40-paragraph theory paper. This draft can then be edited into shape for publication. In outline, it will look as follows:

1. Introduction (3 paras)
2. Historical Background (5)
3. State of the Art (5)
4. Critical Occasion (5)
5. Conceptual Analysis (3 x 5)
6. Discussion (5)
7. Conclusion (2)

Remember that each paragraph should make a single, easily identifiable claim and either support it or elaborate it. It should consist of at least six sentences and at most 200 words. It should be written in exactly 27 minutes.

The introduction will consist of three paragraphs. The first paragraph should be devoted to a history of your field up to the present. The scope of this history will depend on your judgment. Whether your history starts in ancient Athens, in eighteenth-century England, or in Paris of 1968 depends on the contribution you want to make. The second paragraph should be devoted to the present state of the theory. What is the reigning consensus or standing controversy that defines your field of research? This, obviously, should be the state you want to transform in some interesting way, either by settling a dispute or unsettling an agreement.

The third paragraph should announce your contribution. "In this paper, I will argue that..." Notice that "supporting or elaborating" this claim, which is about your paper not your theory, does not yet require you to argue your position. You only have to describe a paper that would make such a contribution. And that means you will essentially be outlining your paper. Now, you have already introduced the historical background in paragraph 1, which you will have space to talk about in part two of the paper, so you don't have to say anything more about it here. Likewise, in the second paragraph you have introduced the current state of the theory, which you will elaborate in greater detail in the third part of the paper. What is left is to say something about how the theoretical problem you are interested in arose and why you are the right person to deal with it, to outline your analysis a little more, and to tell us why it is important, i.e., to summarize your discussion. That is, the introduction ends with an outline of parts 4, 5 and 6 of the paper.

Part 4 takes the place of the methods section of a standard empirical paper. In a sense, you are still saying what you did, but it is perhaps more accurate to say that you are explaining what happened to force you into a theoretical reflection. It may simply be a development within your field (someone else's or your own empirical results, published elsewhere) or it may be an "event" like the publication of a correspondence or a translation of a previously untranslated work by a major theorist. World events, too, may be relevant here. After 1989 and 2001 there were all kinds of reasons to "rethink" the theories that framed work in a whole range of social sciences. Since you're saying how the problem arose, you will also need to say what materials came into view: what texts have you read and how have you read them?

Part 5 will present your argument in detail. It's a good idea to divide the argument into sub-theses each of which can be demonstrated separately. Two to four sections of three to six paragraphs gives you some manageable space to work with here. Finally, part 6 will cash out your analysis in consequences, usually for theory, though sometimes for practice. (You might want to emphasize the important political consequences of your line of thinking.) An important class of "theoretical" implications here is "method". If you're right that we have to see the world in a new way (a theory is always a way of seeing the world) then perhaps we will have to do things differently too?

The conclusion should consist of two paragraphs, one of which states your conceptual argument in the strongest, simplest terms you can imagine. You may want to use the sentence that completes the key sentence of paragraph three (i.e., everything after "I will argue that") as a key sentence here. The last paragraph could suitably extend the history of the field that you presented in paragraph 1 and elaborated in part 2 by imagining a possible future.

I hope that's useful. Don't hesitate to add your own suggestions or questions in the comments.

Monday, November 03, 2014

What We're Doing

I'm grateful to Jonathan for bringing The Universal Mind of Bill Evans to my attention. As I point out in the comments to Jonathan's post, the difference he demonstrates may not be apparent to everyone. If we had not been told, we might experience all three improvisations simply as much better than anything we're capable of ourselves. The same goes for writing. We're not always paying close enough attention to be precise. We "overwrite", let's say.

It's interesting to see Evans's brother Harry push back on the demand for simplicity and accuracy. "To thousands of musicians such as myself: we have to overplay," he says, "because we don't have time to even get to the keyboard to sustain the rudimentary thing." Maybe I'm projecting, but I can feel exasperation in Bill's response. He can only repeat himself: "It's better to do something simple which is real ... It's something you can build on because you know what you're doing." When people explain their faults by saying they don't have time to do it well, I get a little sad. If we were half as "productive" in academia, half as "advanced", but twice as real and precise, we would be so much better off. We would, precisely, know what we're doing.

Sunday, November 02, 2014

Wednesday, October 29, 2014

Answer from Profile Books in re Zizek

I've been in contact with Penny Daniel, managing editor and rights director at Profile Books, about the apparently inadvertent plagiarism of Jean-Marie Muller in Slavoj Zizek's Violence (Profile, 2008) that I blogged about earlier this month. Zizek, you might recall, blamed the error on his publisher, saying that his manuscript had been changed "without [his] knowledge" before publication. To me, this raised some questions.

The answer from Profile is a bit disappointing, but probably as forthcoming as can be expected. Daniel explains that they no longer have the page proofs or any other relevant files so, while she "can neither confirm nor deny what actually happened", and while it would be somewhat at odds with their "usual practice", it is probably true that a copy editor mistakenly formatted what should have been a block quote as a separate paragraph and thereafter inserted the description of Simone Weil as a French religious thinker "out of a desire to help the reader" without informing Zizek. She assumes that Zizek was given page proofs to review before publication and regrets that the error wasn't caught until now. I get the impression that it will be corrected in any subsequent editions of the book.

This will have to do, though it doesn't answer all my questions. There seem to be plenty of stylistic differences between what Zizek describes as "the last version [he has] of the complete manuscript (already copy-edited by the publisher)" and the printed version of the book, for example. I asked Daniel about this since it suggests that the manuscript Zizek has made public is not the last version that was seen by the eyes of a copy editor. Unfortunately, she had no further comment on the matter, citing, like I say, the fact that this all happened six years ago.

I'm working on a final post on this issue, which I'll probably post on Friday. For those who are interested, Adam Kotsko has in the meantime offered a defence of Zizek to keep the conversation going. I'll have some thoughts on his argument in my next post.

Monday, October 27, 2014

Grading Students and/or Reviewing Peers

I shouldn't pretend to be an expert about this, but I do talk to teachers regularly who struggle with the problem of grading student papers. It intersects with the problem of reviewing the papers that peers submit to journals, and, indeed, with the problem of evaluating what we read in general. (It also intersects with the problem of plagiarism, which I'll return to in another post.) Few people, of course, complain about good writing, and few are in doubt about what to do with such papers: enjoy them. It's the bad writing that gives people a pain. And the complaint I normally hear is that badly written papers are disproportionately time-consuming to grade/review. When they are published, they are, of course, simply time-consuming to read and understand.

To my mind, this complaint is rooted in the misconception that academic writers have the right and the power to waste their readers' time. On this view, a writer who writes badly is unilaterally drawing on the reader's time and effort, about which the reader has no say. We are to imagine that the reader is powerless to stop reading when subjected to such treatment by a writer but nonetheless has full rights to gripe and complain when it happens. What is forgotten is that the harm is not being done by the writer to the reader but by the writer to his or her own damnable ethos, i.e., to the reader's opinion of the writer, i.e., to the writer's shot at a good grade or a publication. We have to remind the reader that he or she is in full control of how much time to spend in the company of an author and how to spend it.*

We have to level the playing field. All writers should be given a fair, equal shot at the reader's attention, whether as an examiner or a reviewer. Indeed, the well-written paper should, ultimately, get more attention (and that means more time) than the poorly written paper, but I'll return to that issue in a moment. The best way to level the field is to have a set of standards for what a paper should be able to accomplish in the first three paragraphs, or the first two pages, or the first 10 minutes of reading. You can have your own reading strategy, so I'm leaving this somewhat open. My actual advice, if you're interested, is to be pretty clear about what you want to be told in the first 600 words, what the conclusion should look like, what the literature list should contain (and how it should be set up). We can expect writers to have polished their prose especially in the introduction, so it's entirely fair to form an opinion about the writer's style based on the impression that the first few paragraphs leave you with. Is this a clear, lucid thinker? Is this a careful, conscientious writer?

Assignments vary in length and complexity, and the amount of time you're going to spend grading each paper will vary accordingly. But don't be ashamed about the fact that the grade is usually determined already in the first few minutes. Sometimes, a student/peer will fool you, intentionally or not, but most of the time the quality of a paper's introduction actually is predictive of the grade it will get, or its publishability in a journal. What is important is that each paper you read has the same opportunity to impress you. If it squanders that opportunity by being sloppily written and poorly organized, so that when you run out of time you've learned very little about what is on the writer's mind, that's not your fault, nor the fault of circumstance. It's the incompetence or indifference of the writer that is to blame. And that is actually relevant for the grade. Spelling does, in that sense, count.

I said that good papers should, ultimately, get more attention than bad ones. How might that work if all papers are graded and reviewed in the same amount of time? This is where feedback comes in. There's nothing more unfair than the classroom in which the C students all get detailed criticism of their arguments and grammar, while the A students get a nice big A, a smiley, and a one-word comment like "Brilliant!" "But, surely," it will be said, "the C students must be told where they can improve." Yes, of course. But so, surely, must the A students! Getting an A in a course or writing a publishable paper does not mean you can't improve. It just means you're starting at a high level.

So here's my advice. Let detailed feedback, beyond the mere grade or accept/reject decision, depend on the writer's willingness to receive it. Give the student their A, B, C etc. in a quick and efficient manner, based on some pretty objective characteristics of the paper. Then, if the student wants to hear more, give them an additional assignment: have them rewrite the introduction into three 27-minute paragraphs, or have them produce an after-the-fact outline. (That's if you're me; give them whatever small assignment you like.) Then meet with them and discuss it. The students who want to improve, no matter what their grade, will do the assignment, and now you have their full and specific attention. You can spend time on them without feeling like you're wasting it. The students who don't care (and they get all kinds of grades, I will remind you) will not do the extra bit of work. That's fine too. Everyone's happy.

In the case of a review report, you don't usually have an opportunity for this kind of interaction, of course. So I would suggest saving your detailed line-by-line criticism for a paper that you think should be published. If you are going to recommend a revise and resubmit, limit your feedback to the parts that should be reworked, and let the more detailed feedback come in the next round, after the author has demonstrated a willingness to do that work, and a basic comprehension of the need for it.

In short, my advice is not to resent the task of evaluating the work of others. One of the basic functions of academia is to give people an accurate sense of how smart they are in a particular subject. And there really are differences between people on that scale. For every domain of knowledge, at every level of education, there are those who deserve As, those who deserve Cs, and those who deserve Fs. Part of your job is to assign those grades as fairly and efficiently as possible. It's a perfectly legitimate business.

*This paragraph has been rewritten for clarity. (See comments.)

Tuesday, October 21, 2014


[Six years ago, as a demonstration, I wrote two five-paragraph essays: "Composition" and "Decomposition". I wrote them for the exercise, to demonstrate something formal. But I just reread them and kind of liked them in themselves. Here's what happens when they're combined into one.]

Composition is the art of constructing texts. In his classic, if somewhat forgotten, little handbook, Rhetoric and English Composition, Herbert Grierson points out that this can be understood on three levels: the construction of sentences, the construction of paragraphs and the construction of whole texts. But he also emphasizes the relation between these levels. Not only is "the ideal paragraph" essentially "an expanded sentence", the work should always be guided by the same principles. At all levels, "coherence and the right distribution of the emphasis as determined by the purpose you have in view" are paramount. There is a sense in which style is just your "choice of words". Composition demands that we put words together, in sentences, paragraphs, and texts, to achieve a well-defined goal.

In a sentence, words are put together grammatically in your attempt to mean something by them. In isolation, words don't mean anything very specific; they do not convey a clear meaning. In fact, until a group of letters is positioned among other words, it is unclear even what language it belongs to. The word "hat", for example, refers to something you wear on your head in English but is a form of the verb "to have" in German. A word really only finds its meaning in the context of a sentence, and here its meaning is determined by usage. Usage is the governing principle of grammatical correctness and that is why the way you construct your sentences goes such a long way towards defining your style. What is often called "accepted usage" by grammarians and editors determines the effect that particular words have in particular combinations and in particular settings. The style of your composition, as you try to get the words to mean what you want to say, is your struggle with what usage (in your particular context) would have your words mean before you started using them. This struggle takes place first and foremost within the sentences you write.

If a sentence is an arrangement of words, a paragraph is an arrangement of sentences. There is obviously no grammar of such arrangements, but there are some principles to keep in mind. First and foremost, a paragraph should have a unified purpose. This means that all the sentences that are gathered in a paragraph should, at a general level, be about the same thing. They will not, of course, say the same thing, but they will each play a specific role in elaborating, supporting or illustrating a common subject matter. This, in turn, is but one part of the overall subject matter of the text. "The bearing of each sentence upon what precedes," says Grierson, "should be explicit and unmistakable." In an important sense, then, the text's agenda is not advanced (moved forward) within its paragraphs but between them. A paragraph slows down and dwells, as it were, on a particular element of the larger subject covered by the text.

Ultimately, a composition consists of a series of paragraphs. If you looked only at the topic sentences (usually the first sentences) of these paragraphs, you should get a good sense of how the text is organized and what it wants to accomplish. When writing a text, it can therefore be useful to generate an outline simply by listing these topic sentences and perhaps to organize them further using what will turn out to be section headings. You will here need to decide what the organizing principle of the text as a whole will be: a narrative plot, a logical argument, a call to arms, a set of impressions, etc. "It is," says Grierson, "an additional satisfaction if in an essay or a book you can feel at the end not only that you have derived pleasure from this or that part of the work, or this or that special feature—the language, the character drawing, the thoughts, the descriptions—but that as you lay it down you have the impression of a single directing purpose throughout". The reader should feel, as Aristotle also said, that there was a reason to begin exactly where you began and end exactly where you ended. The composition of the whole text depends on the way the paragraphs are strung together to achieve this single purpose.

Texts are constructed out of words, not ideas, as Mallarmé might say. Words are arranged into sentences, sentences into paragraphs, and paragraphs into whole compositions. The correctness or rightness of these arrangements depends on their overall effect, that is, their aptness to a single purpose. This purpose, which gives the composition its coherence, makes demands of the text as a whole, and the demands of the text will make demands of the individual paragraphs, which will then pass further demands onto the sentences. It's really like any other construction project: the smaller parts must contribute to the larger whole; they must make themselves useful. It is often in working with the sentences that one discovers the style that is best suited to accomplishing the overall goal, always working under the general constraints of usage. It is also here that you might find a truly creative solution to the problem of writing, which can be a very complex problem because there are so many different reasons to write. Composition, in any case, is the simple art of solving it.

But is it really so simple? Grierson insists that good composition is characterized by "coherence and the right distribution of emphasis as determined by the purpose you have in view". But who are "you"? Grierson clearly assumes that the writer, operating somewhere well outside the text (somewhere beyond the page on which the words have been gathered), is in control of his (always his) expression. He would no doubt install the reader in the same space. But why, then, do these two subjects (of the same merciful lord) need a text? Couldn't "you" and "I" just talk to each other? Can't we all just get along? No, let us assume that the only "you" to speak of is the reader. Texts often crumble in our hands when we pick them up. If "composition" denotes how a text is "put together", "decomposition" might denote how it "comes apart". If construction is about how a text is built up, how it is assembled out of words, sentences, and paragraphs, deconstruction is about how a text breaks down, how it collapses, as Derrida taught us. Decomposition is about activating the incoherence of the text, its excesses of emphasis, the indeterminacy of its always multiple points of view.

A text coheres if it is read charitably, that is, morally. Cued by markers that suggest the text wants to describe a place, or tell a story, or put forth an argument, we let our familiarity with space, time, or logic respectively (and always respectfully) inform our reading. Herbert Grierson emphasizes that we have "knowledge by acquaintance" of these "orders of phenomena", that is, we are continually aware of these orders in going about our ordinary business. Coherence is an attribute of the surface of discourse. The first sign of the underlying incoherence of a text is therefore the superficial interference, or dissonance, that may be observed between spatial, temporal and conceptual orders. The story may at first seem plausible, but not in the place suggested. The arrangement of things in the room may be quite reasonable, but how did they get there?

All sorts of embarrassing details lurk in the clash of orders that deconstruction brings to the fore. Most important, however, is the order that Grierson leaves out, or (more charitably) subordinates to the order of thought (logic): the order of emotion. Words and sentences do not just evoke thoughts, facts and acts; they also evoke particular feelings. Too often, writing makes too little or, in other literature, too much of the emotional response of the reader. It underestimates the indignation or overestimates the empathy of the reader. And we, as readers, often much too easily play along. "[The] law of coherence is a heuristic rule," said Foucault in The Archaeology of Knowledge, "a procedural obligation, almost a moral constraint of research." It tells us

not to multiply contradictions uselessly; not to be taken in by small differences; not to give too much weight to changes, disavowals, returns to the past, and polemics; not to suppose that men's discourse is perpetually undermined from within by the contradiction of their desires, the influences that they have been subjected to, or the conditions in which they live.

To decompose a text is precisely to confront it, not with the "order of phenomena" normally supposed by the reader (to have been intended by the writer), but with the disorder by which the text is strangely disposed. It happens whenever we shamelessly insist on reading the text.

Deconstruction is a shift of emphasis while reading. It actively challenges the principle of composition: "coherence and the right distribution of the emphasis". We have just dealt with coherence; to better understand the decomposition of emphasis, consider two different ways of playing Bach. Wolfgang Sandner has said that Keith Jarrett plays Bach "emphasizing nothing, demanding nothing, concealing nothing and withholding nothing. In one word: natural." He cites the pianist himself in support of this thesis. "This music does not need my assistance," says Jarrett. "The melodic lines themselves are expressive to me." Compare this with what Sandner says of perhaps the most famous interpreter of Bach, Glenn Gould. "Obviously," writes Sandner, "he did not even trust his own analyses. He remained in search of clues. He spread the tones, loosened their coherence, emphasized side-lines and with his extreme tempi subjected the works of Bach to a kind of stress test."

There may be no better way to summarize the spirit of deconstruction: don't trust your own analyses but continue the search for clues; emphasize side-lines and read at extreme speeds (whether fast or slow); all in all, subject the text to a stress test. You can experience the difference by listening to their recordings of the thirteenth prelude in Book I of Bach’s Wohltemperierte Klavier. By slowing it down, and emphasizing the space between the tones, Gould is able to draw our attention to our own contribution to the music, our listening. It is important to keep in mind that Sandner is talking about two performances of the same composition, two "readings" of the same "text". The composer may have preferred one or the other, but there is no basic sense in which one is "right" and the other "wrong". Each reveals something about the composition. A "natural" emphasis may offer a great deal of immediate aesthetic pleasure, to be sure, but deconstruction is the pursuit of a more difficult beauty. Decomposition results from an excess of emphasis.

It is often assumed that good academic writing is rooted in a singularity of purpose. "The specialist," Grierson tells us, "need think of nothing in regard to style but clearness and precision." And he alleges a reason: both his subject-matter and his audience are given to him, so his point of view is largely fixed in advance. He need only ensure that his style does not obstruct the audience's view of his subject. "Everything else is an intrusion, and an unnecessary intrusion, because he can count upon willing and patient readers who desire to study the subject". For Grierson, specialist writing is a particular way of establishing the point of view of a text, which in turn "determines everything". Since, following Aristotle, the point of view depends on the speaker, the subject-matter, and the audience involved, says Grierson, there is really an infinity of possible points of view for any text.

But he makes a crucial assumption. A given text, he notes, will have a single point of view; the writer can make a series of rhetorical decisions to, as it were, "fix" it. Deconstruction draws this assumption radically into question, beginning with the allegedly singular purpose of the writer; for even the most academic writers are torn, at least, between enlightening their readers and furthering their careers. This immediately suggests multiple audiences, but it also suggests that a text is about any number of things that are not mentioned in the abstract. Deconstruction attempts to chronicle the "wars of signification" that take place behind the often irenic facade of an academic text. What we might call "academic composure" is fostered by an illusion of the writer's singular purpose, namely, that his only intention is to instruct a "willing and patient" reader, one whose only desire, in turn, is "to study the subject". Once we drop this assumption the text begins to decompose.

The essential thing is to read the text. To deconstruct it, we loosen its coherence, redistribute its emphasis, and question the unity of its purpose. All of these are acts of reading. It is true that deconstruction demands that we set aside the usual obligations of reading; it demands that we read against what are often the clearly marked intentions of the author. But deconstruction should not be taken as a personal attack on the author. Grierson assures the writer that the text will be read in the light of the reader's "knowledge by acquaintance" of the basic orderliness of experience, that it will be read with a natural emphasis, that its readers, desirous only of study, will be patient and willing. Such assurances, when believed, produce a particular kind of text, and it may be a very good one. Every once in a while, however, we need as writers to see what our assumptions about the reader have actually accomplished. On such readings, the text will begin to come apart, sometimes like a collapsing structure, and sometimes like a mound of compost. We can use the results of such decompositions when we compose texts of our own.