Archive for category Updates
Colorizing Civil War Photographs
Posted by johnacaseyjr in Civil War, Updates on October 10, 2013
This morning a post showed up on the C-19 Listserv for nineteenth-century Americanists that linked to a Daily Mail article on the efforts of two technicians to colorize Civil War-era photographs.
You can read the article and see samples of their work here: http://www.dailymail.co.uk/news/article-2446391/Amazing-Civil-War-photographs-created-colorist-bring-eras-heroes-characters-life-color-time.html
I have to admit that seeing these well-known photos in color was fascinating. Especially interesting was the ability to see the color of the landscape (indicating season) and also the tint of the uniforms. We talk so much about the “blue” and “grey” and yet most of the imagery we have of them is black and white.
But I am also wary of the notion of improving history through technological advances rather than simply using technology to store documents in an alternate format for preservation purposes. This is something that could potentially do far more damage to the archive than Ted Turner’s ill-fated effort years ago to colorize classic cinema.
Postscript: A Civil War scholar responding to the C-19 listserv post a few minutes ago reminded me that photographs were hand-colored in the nineteenth century. So again, the technology is not the issue here. It’s the motive. Why color these photographs?
Have any scholars commented on the theoretical implications of color in historical documents? What is the real psychological difference between a document in color vs. one in black and white?
I know when I teach film in my literature courses, black and white films tend to be perceived by some students as boring and by others as more authoritative (cinema rather than film). I call it the “black and white” effect. I wonder if the same holds true of print documents and photos.
Can vs. Should: A Subtle Distinction (?)
Posted by johnacaseyjr in Higher Ed, Updates on October 4, 2013
Our examination in class of the Digital Humanities has shown the difference between visions of what this movement might mean to the Discipline of English. The makers of digital tools, the old guard of Humanities Computing, still provide a compelling case for the need to actually create something in order to consider yourself a Digital Humanist. Users of existing digital tools show how reading and scholarship are rapidly shifting due to technological advances. And Critics, well, they are struggling to catch up. What would a critical theory of the Digital Humanities look like and is it needed?
Although there are many objections to the Digital Humanities as a field of study, I think one major point of contention is the rapidity with which digital projects come and go. To the less technically inclined this seems a waste of scarce resources of both time and money. Why go to such great…
Project Biocultures Talk (Ato Quayson)
Posted by johnacaseyjr in Updates on September 21, 2013
On Monday I attended, along with several of my students from English 240, the Project Biocultures talk by Ato Quayson, Professor of English at the University of Toronto and founding Director of the Centre for Diaspora and Transnational Studies.
Professor Quayson began by describing his approach to Disability Studies, much of which came from his book Aesthetic Nervousness. The moderator for the talk, Lennard Davis, rightly noted that Quayson’s approach to disability was broad but in a theoretically enabling way. This observation was borne out in the second portion of the talk, where Quayson applied his theoretical approach to disability to Samuel Beckett’s novel Murphy (1938).
Professor Quayson contended that any coherent theory of disability had to acknowledge not simply its presence in literature and culture but also the “effect” of that disability in a given society. Aesthetic nervousness, as Quayson described it, was where representational practices at…
Shteir’s Complaint
Posted by johnacaseyjr in Chicago Literature, Updates on April 27, 2013
(Note: This piece is also posted on my current events blog Man Without a Newspaper.)
By now I’m late to the discussion of the controversy surrounding DePaul University Theater Professor Rachel Shteir’s April 18th review of three recently released books on Chicago: Thomas Dyja’s The Third Coast: When Chicago Built the American Dream, Jeff Coen and John Chase’s Golden: How Rod Blagojevich Talked Himself Out of the Governor’s Office and Into Prison, and Neil Steinberg’s You Were Never In Chicago.
Her article has started a heated debate between those who agree with her that Chicago has an unwarranted sense of self-confidence (i.e. “boosterism”) and those who feel that she’s a bitchy New Yorker carrying on the age-old rant that Chicago is a provincial “second city” compared to the coastal greatness and finesse of the Big Apple.
As fascinating as these critiques are to rehash (they are at least a century old), their writers have neglected to point out three of the largest flaws in Shteir’s piece.
The first is one of genre. Shteir is a terrible book reviewer. Perhaps she thinks she has attained the status of an Edmund Wilson or a Susan Sontag, critics who could ramble on about whatever they liked while ostensibly “reviewing” a book or film. That, at least, is what Shteir does throughout much of her review. In fact, the only section that truly feels like a book review involves Thomas Dyja’s masterful book, which deserves a much more incisive commentary than Shteir can provide.
A second flaw manifests itself in her categorical confusion between literary writing and public policy. Rahm Emanuel and his staff are indeed “swaggering” in their boosterism for Chicago. So are local businesses and developers. That’s their job. Chicago literary writers, on the other hand, are beholden to their own idiosyncratic ideals. Part of our problem as a city is that the published writers who are labeled “Chicago authors” are so divergent that a clear picture is hard to assemble. What is the common thread that links Gwendolyn Brooks, Nelson Algren, Mike Royko, Ida B. Wells, Aleksandar Hemon, and Brigid Pasulka? Immigration is about the best I can do, but that applies to many U.S. cities.
This leads me to my final point, and that is Professor Shteir’s silence on the role the publishing industry (most of which is located in her beloved New York) plays in skewing the image of Chicago writing and culture that she purports to explain to NYT readers. I can think of many Chicago authors, quite a few of whom are close friends, whose works answer Shteir’s charge that Chicago needs to be more self-critical. Yet they can’t find a publisher willing to take a risk on their fiction, or they publish with small presses that hardly ever come under scrutiny from the likes of the NYT book review.
Shteir’s review should remind cultural critics that public intellectual work has standards of its own. Just because you’re not under the unrelenting microscope of the peer-review process doesn’t mean you can get away with sloppy reasoning and evidence. It should also remind us that generalizations about cities (or anything, for that matter) are limited by thousands of qualifiers. “Chicago literature” and “Chicago style” are simply heuristics.
On a more personal note, I’ve lived in Chicago for 13 years. As a transplant from Vermont, I took a while to get used to how flat the landscape is in the city. I’ve grown to love Chicago over that time in the complicated way described by Nelson Algren in his book Chicago: City on the Make: “Yet once you’ve come to be part of this particular patch, you’ll never love another. Like loving a woman with a broken nose, you may well find lovelier lovelies. But never a lovely so real.”
Like any city, town, or village, residents need to learn how to take the good along with the bad if they plan to become “part of this particular patch.” I’ve learned how to do this in my time in Chicago. Shteir apparently is still deciding if it’s worth her time. I wish her luck.
“I Returned To My Tree In The Rain”
Posted by johnacaseyjr in Civil War, Updates on April 13, 2013
My research has had me reading a lot over the past few months about trauma, specifically combat-related trauma. As I prepared my remarks on this subject for a presentation at the Northeast Modern Language Association (NEMLA) conference a few weeks ago, I was particularly struck by the conundrum that Post-Traumatic Stress Disorder (PTSD) presents to scholars.
On the one hand, PTSD has now made a vast field of study possible in non-medical and non-scientific disciplines that simply was not there prior to the 1980s. Without PTSD’s seemingly stable foundation, it’s hard to imagine “trauma studies” finding much ground in academic and public circles. Yet, on the other hand, PTSD prevents us from seeing and understanding a wide range of responses to horrific events that simply do not fit within its paradigm.
These thoughts came to me most strongly in relation to Ulysses S. Grant. The former President and Union General’s Personal Memoirs (1885) played a minor though significant role in my dissertation. His experience of combat was so different from that of front line soldiers such as Ambrose Bierce and Sam Watkins that I hesitated to include him in the same chapter with those authors. Our current conception of trauma seemed to exclude him from the kinds of troubled and troubling memories that marked much of Watkins’ and Bierce’s work.
Nonetheless, I remained curious about what impact (if any) combat trauma had on Grant’s narrative. What I found in his chapter on the battle of Shiloh rewarded my persistence.
On the night after the first day of battle, Grant writes:
“During the night rain fell in torrents and our troops were exposed to the storm without shelter. I made my headquarters under a tree a few hundred yards back from the river bank. My ankle was so much swollen from the fall of my horse the Friday night preceding, and the bruise was so painful, that I could get no rest. The drenching rain would have precluded the possibility of sleep without this additional cause. Some time after midnight, growing restive under the storm and the continuous pain, I moved back to the log-house under the bank. This had been taken as a hospital, and all night wounded men were being brought in, their wounds dressed, a leg or an arm amputated as the case might require, and everything being done to save life or alleviate suffering. The sight was more unendurable than encountering the enemy’s fire, and I returned to my tree in the rain.”
Throughout most of his memoirs, Grant maintains a firm hand on the narrative. Even though he wants his readers to see him as a man driven by the dictates of fate (“Man Proposes God Disposes” are the first words of his text), his narrative voice remains strictly controlled. It is only in rare moments such as the one above that Grant drops his public persona and we glimpse the ordinary man behind the name.
What we see is a man who may not fit the paradigm associated with PTSD. However, he is clearly touched by what he has witnessed, so much so that he writes about it over 20 years later. Grant is confronted in that log-house with the consequences of military command. He doesn’t like what he sees.
Would it cheapen what soldiers at the front line experience to consider this trauma rather than simply garden-variety guilt or regret? I don’t know. It’s still an issue I’m puzzling over as I consider the traces of war in Civil War veterans’ writing.
What I do know is that it’s time for scholars to find a way to talk about trauma that doesn’t automatically gravitate towards PTSD.
A Return to the Text: Reflections on NEMLA 2013
Posted by johnacaseyjr in Higher Ed, Updates on April 3, 2013
As Stanley Fish discovered more than a year ago, it’s hard to call a trend based simply on the number of sessions listed in the program of an academic conference. That’s why I’m hesitant to call what I observed at NEMLA 2013 a trend just yet. It is worth noting, however, that a shift seems to be occurring among a sizable number of literary scholars and that shift could prove comforting to the technophobes among us who shudder every time they hear the phrase “digital humanities.”
What I observed in panels such as “Teaching the History of the Book to Undergraduates” and “Teaching How We Read Now” was the already well-documented movement away from post-structuralism and identity-based theories in favor of textual analysis. Yet this is far from the old-fashioned textual analysis practiced by literary scholars since the days when Greek and Latin authors constituted literary study on United States college campuses.
QR codes are now embedded in editions of medieval manuscripts to reveal how Chaucer’s Middle English should sound. Hyperlinks allow multiple editions of a text to be read simultaneously and compared. Computer algorithms analyze an author’s use of language to determine who wrote an anonymous work of fiction. Data-mining techniques help scholars create word clouds and thought maps that dramatically visualize the zeitgeist of an era or show the evolution of language in graphic terms.
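To give a sense of how those attribution algorithms work at their core, here is a minimal sketch in Python (my own toy illustration, not any particular scholar’s tool): it compares how often each candidate author uses common function words, the classic signal in computational stylistics, and assigns a disputed text to the nearest stylistic profile. The texts and author names below are placeholders; real stylometry uses much larger samples and more refined measures such as Burrows’s Delta.

```python
from collections import Counter
import math

# Common function words: frequent, topic-neutral words whose usage rates
# tend to stay stable within an author and differ between authors.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "but"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two stylistic profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def attribute(disputed, candidates):
    """Return the candidate author whose profile is closest to the disputed text."""
    target = profile(disputed)
    return min(candidates, key=lambda name: distance(profile(candidates[name]), target))

# Placeholder corpora; a real study would use thousands of words per author.
known = {
    "Author A": "the river ran to the sea and the rain fell in the night and the wind rose",
    "Author B": "it is a truth that a mind in want of a thought is restless but it is free",
}
print(attribute("the storm came to the camp and the men slept in the mud", known))
```

Even a toy like this hints at why the philologist’s attention to minute verbal habits translates so naturally into code.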
The techniques are new and in some cases require more advanced technical knowledge than the average humanities scholar might possess. But the newness of the techniques, with all their bells and whistles, hides the reality that philologists (in the guise of DH gurus) are cool again.
Where this turn in literary scholarship will eventually lead is anyone’s guess. I for one am glad to read something for a change that isn’t Foucault.
A College By Any Other Name
Posted by johnacaseyjr in Higher Ed, Updates on November 4, 2012
Mid-terms have come and gone at the university where I teach and work as an administrator. With their passing, students are left to ponder just what it will take to get them through the rest of the semester. Some will take advantage of the services available to assist them as they try not to buckle under the growing combined burdens of schoolwork, jobs, and social life. An even larger number, however, will fall by the wayside and drop out of their classes.
This is especially true of the first-year students I teach. ACT statistics from 2012 show a first-to-second-year retention rate of approximately 67% across the United States colleges and universities they surveyed. Even if the financial burden of going to college were not as heavy as it is today, this rate would still be alarming. It is indicative of an educational system that is good at persuading students to enroll, but not as good at ushering them toward the completion of their degrees.
Part of the problem is the message that parents, educators, and public figures such as President Obama send to prospective students. First they tell them that college is a surefire ticket to a better life. Then they convince them that any college and degree program will do. All the would-be students need concern themselves with is hurrying up and getting a BA before it’s too late.
A major problem with this message is that the first assertion is a selective interpretation of the truth. Statistics show that “on average” college graduates have greater earning power than those with only a high school diploma. The reality, however, is much cloudier. Earning power depends largely on the degree earned and the school granting it. As more Americans hold bachelor’s degrees, employers can be more selective. This makes the subject studied and the network of potential recommenders that a well-known school can provide more important than ever. It is also worth noting that the only reason college graduates’ earnings have remained higher is that the average wage of high-school-educated employees has plummeted since the 1980s.
Armed with this faulty information, students are then fed the equally faulty perspective that all institutions of higher education are essentially alike. How many students do you know who are savvy enough to parse the distinction between a college and a university? How many faculty can, for that matter? What does a community college really offer? How about for-profits? Students are left with the impression that college is vital to their future, but then are left essentially adrift to figure out where they should go on their own. Is it any wonder that undergraduates are often better at comparison shopping for a smartphone than at picking out a college?
One way to alleviate this problem is to be honest with would-be students. Don’t discourage them from going to college, but explain that, depending on the career path they intend to pursue, a college degree might not be necessary. There are numerous certificate programs and high school vocational programs that can place students in satisfying careers that pay a living wage. Additionally, there are two-year colleges that can either serve as a place for students to discover what they are interested in studying or provide them a skill that is immediately applicable in the workforce.
Making these career-track options more visible and more viable will then enable colleges and universities in the United States to stop marketing themselves as job-training centers. Four-year institutions of higher learning should busy themselves imagining the jobs of tomorrow rather than placing their students in the popular fields of today.
Revisiting the Digital Divide
Posted by johnacaseyjr in Higher Ed, Updates on October 15, 2012
Much of the research on the “digital divide” focuses on individual users and demographic groups that have traditionally had limited access to technology. A recent study by the Pew Research Center continues this trend. Its findings indicate that thanks to mobile technology, specifically the smartphone, internet use among all social groups is increasing. Fear of technology is also fading as once-excluded groups learn digital literacy.
Although these studies are heartening to read, indicating gradual progress toward greater access to technology for all citizens, they fail to take into account the digital divide that exists within educational institutions. While television, radio, and internet news providers have been busy bashing the teachers’ unions and tearing apart the educational policies of “No Child Left Behind,” precious little has been said about the uneven technological infrastructure of our nation’s schools.
For every school with access to iPads and state-of-the-art computer labs, there are hundreds with only a handful of aging computers (usually in the library) available on a first-come, first-served basis for internet research and word processing. This problem is endemic throughout the current educational system, reaching as far as the ranks of higher education.
Right now I am writing this blog post at home on my personal laptop. Partly this was a voluntary decision, as I wanted to write during the evening in the comfort of my home and not use work resources for non-work-related activities. Even if I had wanted to write this post earlier at work, however, I could not have.
I share an office at my institution with four other non-tenure-track faculty (NTT, as we’re called these days). At one point, we had a desktop computer that was five years old. Not surprisingly, given the CPU-intensive nature of Web 2.0, this machine died during the summer semester.
In its place, next to the CRT monitor (i.e., the kind that looks like an old TV), mouse, and keyboard of the old computer, sits a seven-year-old laptop, a PowerBook G4. This machine was wrangled from the department after over a month of hectoring our IT guy. I had never even heard of this particular model of Apple laptop, so I took the time to look it up on Wikipedia. It turns out that the “new” computer in my office is the precursor to the now-ubiquitous MacBook.
With its limited CPU power and an outdated browser, the most I can do with this laptop is check my email and read websites that aren’t overly graphics-heavy or interactive. On most days I go upstairs and wait to use one of the three computers in our departmental computer lab. I also have the option (unlike most of my colleagues) of using the computer in my other office, where I serve as an undergraduate studies program assistant.
Added to these frustrations is the lack of wireless internet access in either of my offices, which prevents me from bringing my personal iPad to work and getting around the technological limitations of my workspace. At one point, I was able to “hack” my way onto the network by plugging the internet cable in my teaching office into my own laptop, but as of today our internet connection there is down. This also makes it impossible to use the telephone in that room, as my institution switched a few years ago from regular phone service to VoIP (Voice over Internet Protocol).
If we move from my early-twentieth-century office into the classrooms where I teach, the situation is only slightly better. In a course I designed to teach digital literacy and multimodal writing, the most advanced technology in any of my three classrooms is a 25″ flat-screen television with a VGA cable that lets me plug in my own laptop and display its screen. Wireless access is available in all three rooms, but using it assumes that my students can afford to bring their own technology to class as I have.
“Plug and Play” is better than nothing in a world where technological access is no longer a luxury but a precondition for education to take place. Yet it places the burden of technology’s cost on the students and educators. Not only is this unfair, it also sends a strange message to our students: “You need to be educated for the jobs of the 21st century, but we will not provide the tools.” No wonder self-learning is coming back into fashion. Why pay for school when you can buy a laptop and let the internet teach you the skills needed to survive in a tech-driven world?
Now I should perhaps qualify my statement/rant above by reiterating that I am an NTT faculty member. I’m also an English professor. Perhaps things are different for the TT faculty in my department or are significantly better in other programs at my institution. My suspicion, however, is that while the technological infrastructure elsewhere might be less antiquated than what I described above, it is still inadequate to meet student needs.
When we talk about the digital divide, we need to remember that surfing the internet is a skill easily learned alone at home. Using the web to your advantage, however, is a skill that should be learned collectively in the classroom. Regrettably, this can’t happen when many educators work in an environment designed to teach Baby Boomers to fight the Red Menace.
Are We Technology’s Killer App?
Posted by johnacaseyjr in Updates on November 8, 2011
After reading Cathy Davidson’s book Now You See It, a work that examines the potential of technology to reshape the ways in which we learn and work, I thought it would be beneficial to get the other side of the story. Sherry Turkle’s new book Alone Together: Why We Expect More From Technology and Less From Each Other provides a perspective that is vastly different from that of Davidson and other Digital Humanists who see technology as a way to enhance our humanness and connect with each other in more productive ways.
Turkle sees technology as a hindrance to meaningful human interaction. First examining humanoid robots and then exploring social media, she argues that what we are seeing in both instances is simply ourselves. Both the robot and our lists of “followers” or “friends” simply reflect back at us what we want to see. We are talking to ourselves, and they (i.e., our electronic audience) applaud our performance. And, what’s more, on those occasions when we do receive a negative review, the critics are easily unfriended or ignored.
The consequences of this shift in our emotional relationship to technology are far-reaching, according to Turkle. Most importantly, it removes mutuality from any discussion of human behavior. Everything we do is directed one way, with little thought of the consequences or the response. The speed of communication also ensures that thoughts will come and go as fast as leaves blowing in a strong wind. Reaction rather than sustained thought, acquaintance rather than true friendship, are the rules of the day.
Turkle’s book is not meant to offer solutions to these problems but instead to outline them and explain their origin. On this latter point she is uniquely qualified, having written two previous books on the connection between humans and technology, The Second Self and Life on the Screen. Turkle readily admits that she has grown increasingly pessimistic about technology as she has written on the subject over time. In this third book she shows how humans have become more like machines even as machines have become more like us, rendering the famous Turing test beside the point. We are all bots now is the constant refrain of her text. In making this claim she shows an unlikely affinity with Neil Postman, the cranky humanist whose 1992 book Technopoly deftly outlined how humans had become the tools of their tools. Her conclusion seems to be that if more people come to share her concern, we will step back from the ledge and find ways to make technology work for us in ways that foster human interaction rather than mediate it.
As with Turkle’s two previous books on the subject, Alone Together is well researched. My only complaint involves the overall structure of the book, which is confusing at times. Her division of the text into one section on humanoid robots and another on social media feels artificial and makes the work appear to be two smaller texts stitched together. Additionally, there is a considerable amount of repetition in each chapter that suggests a need for more editing. The book could have been cut by at least 60 pages and still made its point effectively.
That said, Turkle’s book is worth reading by those who are suspicious about technology as well as those who embrace it with open arms. She leaves the reader much to think about and paints a damning portrait of how humans have let each other down while using technology as an excuse. One cannot help but think that Marx would approve.