Away in the Field

I know it’s been somewhat silent on the blog for a few days, because I’ve been traveling and busy making arrangements for more travel. The upcoming week I will be down at the Georgia coast doing field work, so unfortunately it will probably be quiet a little bit longer (but the good news is the only thing standing between me and a PhD will be a final chapter!). Your regularly scheduled musings on higher education will return around the middle of next week if all goes to plan.

In the meantime, let me direct your attention to a few interesting articles I’ve read recently that will certainly provide food for thought.

This excellent article in the Chronicle deals in more detail with the subject of job training (see my own take on the issue here).

A very nice take on a type of diversity that is, in my opinion, entirely under-discussed: socioeconomic diversity. It’s an important topic, to say the least.

A great and accessible take on fixed versus growth mindsets, something every instructor should be aware of.

The Taxonomy of a School, and Why it Matters

I’ve said it time and again, but it bears repeating: I think one of the biggest problems in the national conversation about higher education is that “higher ed” is treated as one single monolithic entity. There is simply “college.” Of course, anyone in academia immediately recognizes that this is a fallacy, and I honestly would not have been surprised to see the hashtag #notallschools popping up on social media. Yet the idea is ingrained in the public consciousness, and the statements and stories that enter the media show very clearly that policy makers and pundits do not have a clear understanding of the diversity of higher ed institutions. Furthermore, there seems to be little appreciation of the different roles schools across the spectrum carry out, something I’ve written about before to a small degree. In my opinion, this lack of understanding is a big problem, in no small part because students cannot make smart decisions for their future if they do not understand the types of paths available to them. I have no doubt that the student debt issue is at least partially a consequence of a failure on the part of higher ed to dispel the notion of a “one size fits all” school (though there are possible reasons for that). Poor understanding of institutional missions also makes responsible policy making nearly impossible; if most policy makers’ chief experience comes from their time as students, then their decisions and pressures will reflect only that one role of the university system. Clearly, we need to do a better job of explaining what particular types of schools do.

Schools’ Fundamental Roles

There are many possible ways that higher ed institutions can be divided: public or private, 2-year or 4-year, types and proportions of degrees conferred, etc. Most if not all of these aspects factor into the commonly used (though perhaps less well known outside higher ed) Carnegie Classification. Although the Carnegie scale is extremely useful, I think that, for simplicity, we can boil it down to one aspect: the relative emphases on teaching versus research. A school like UGA, and many of the other big names in state schools, is primarily a research institution, whereas a school such as my undergraduate college Hendrix is undoubtedly a teaching-centered institution. This is not, of course, to imply that teaching is unimportant at UGA or that research is inconsequential at Hendrix, but there is no gainsaying the very different roles the schools play. Even in terms of research, the two schools have very different focuses; here at UGA, research is an end unto itself and is intended for publication, while at Hendrix the research is clearly about the learning experience (and many projects focus on producing conference presentations that students themselves give). Those differences are almost always reflected in hiring practices as well: at an R1 like UGA, nothing can rescue a bad research talk, and at Hendrix, no one will be hired who gives a bad teaching demonstration (though in all honesty, in today’s market, a successful candidate needs to nail both aspects at either school).

So why do I think these distinctions are important? For one thing, in my experience, many students and families do not really understand the type of school they are applying to. While I firmly believe a good education can be had anywhere with enough effort, I also think students should know from day one how the faculty might differ from place to place. There are many great teachers here and at other large universities, but the vast majority of faculty members here are not hired for their teaching; unfortunately, I often see the negative effects of that when observing classes (directly and indirectly). On the other hand, there is certainly something to be said for the rich research experiences that potentially exist for undergraduates. Although I am biased towards small liberal arts colleges for a variety of reasons, many students have different wants and needs, and they need to understand a university environment and its mission to make the best decision. Even more importantly – something I feel very strongly about, as I’ve said before – many students’ careers and aptitudes are simply not good fits for the 4-year school, and they could save a lot of money and frustration by being aware of, and availing themselves of, a quality 2-year school.

In terms of policy, if schools do not clearly articulate their own roles, they face the very real threat of being held to standards or subjected to pressures that are not in line with their own missions. The case of North Carolina comes especially to mind, with its ill-informed attempt to force a 4-4 teaching load on professors (that’s 4 courses per semester, if you’re not familiar with the jargon). Policy makers clearly saw the universities’ main role as institutions of instruction, and attempted to work in that mold despite the fact that such a bill would have crippled the actual main mission of producing research. In a similar vein, policy makers seem to focus particularly on the large R1 institutions when it comes to job training, and in so doing tend to ignore the schools that actually perform those functions and are often in dire need of support: community and technical schools. Now, I’m not saying that the big R1s should be left alone and not held to higher educational standards, but I am saying that we need to have a clear understanding of schools’ roles if we are to have this conversation.

Shifting Roles

Now that I’ve spent a good bit of time articulating the roles of these different school types, I need to mention that they are not in any sense solidly fixed. For a multitude of reasons, many schools are facing pressures to adopt more fluid or integrated missions. Many of these are internal and aim to improve student success: Hendrix (as just one example) is one of many liberal arts schools actively increasing student involvement in research to improve their chances of post-graduate success (e.g. acceptance into professional school). UGA, for its part, recently announced an initiative to emphasize experiential learning; although this is not a new idea, the move on the part of a large R1 is an important signal of some of the changes sweeping through schools. In many ways, what underlies these changes is the realization that teaching and research are not necessarily separate endeavors, and certainly not mutually exclusive ones, and so I think this increasing fluidity between research and teaching is an overall good thing. As for associate’s colleges, although their roles have not really changed as much, they have been getting more and more attention as a legitimate career path, and I will admit I was surprised to learn that there is much more to them than job training.

Of course, having said all that, I must point out that change for change’s sake is dangerous, and especially troublesome is what happens when a school tries to be something it is not. The case of Georgia Southern comes particularly to mind; it is an excellent regional teaching-centered university, but through a series of leadership changes it has been trying to pivot towards more and more research. This fight to be something that the school is not has led to substantial friction between leadership and faculty, including a well-publicized, now-public email (though for the life of me I can’t find my link). Unfortunately, this may be symptomatic of the larger drive for schools to be more and more of a one-size-fits-all model in order to secure more students and the tuition funding associated with them, but no one wins when a school loses its identity. Although we are undoubtedly in a time of substantial shifts in higher education, we can never lose sight of the roles our schools play. No school is completely like the others, and they cannot be treated as such.

Why a Liberal Arts Education Matters

My time at Hendrix College changed my life. I can say that without a breath of hesitation. Granted, the college/university years are themselves years of great change, a massive transition from youth to adulthood, but I have no doubt that the person I am today is a direct product of my experience at Hendrix. I can further say that who I am would undoubtedly have been very different had I attended another school, such as a big R1 institution. Although Hendrix did and does many great things (typical of a quality liberal arts college), if I were to boil it down to one single factor, I would have to give most of the credit to my liberal arts education. Had I not gone there, I highly doubt I would have uncovered a love for history, or had the chance to learn and master multiple instruments to symphony-level proficiency. Gone would be the days when I spent easily twice as much time learning about things outside my field as I did on biology, when the foundation for my modern academic identity was formed. I consider myself a better person for my days there, a better academic, and a more responsible member of our society, and I know I’m not alone. Many people who have received liberal arts educations have had similar life-changing results, yet very rarely have we as academics really sat down to articulate why the liberal arts approach is important. Unfortunately, if we do not make that case, it is very easy for the liberal arts to be attacked by policy-makers and students alike; North Carolina is just one example of programs being destroyed because of a poor understanding of their value.

A Matter of Terminology

Sadly, the conversation about liberal arts and the utility of different degree programs is marred to some degree by matters of language. No small amount of opposition to liberal arts programs from the right comes simply from opposition to the word “liberal,” despite the fact that there is no direct relationship between liberal arts and liberalism*. “Liberal arts” is also sometimes used as a synonym for the humanities, rather than as a term for broad, wide-ranging education. This treatment is not entirely unjustified, because the historical liberal arts line up well with the humanities of today, but I prefer that those terms be separate: one refers to a type of education, and the other refers to specific fields of study. Let me also be clear that I am not using “liberal arts” as a synonym for “small liberal arts college;” although the SLAC is a leader in liberal arts education, that type of study can be applied at far more diverse institution types than just the small college. When I refer to a liberal arts education, I am referring specifically to a broad-ranging course of study, usually embodied in a core curriculum that emphasizes coursework in multiple disparate disciplines. That is the type of education whose value I wish to defend.

So why, then, do I think a liberal arts education is so important?

The Value of Flexibility

Because a liberal arts education is inherently too broad to endow expertise in any one subject (even one’s own major is just an intellectual appetizer), one of its primary focuses is not on subject mastery, but on learning how to think and how to learn across different disciplines. Our ideal “finished product” student is one who has both the drive and the capacity to continually learn a wide variety of things throughout their lives. Furthermore, we hope to help students develop the ability to be adaptive, to be able to quickly grasp the core of a problem, draw on broad knowledge bases to figure out where to start, and to know what tools to use. In my mind, that is far more valuable than giving students technical mastery, because we have no way of knowing what the main tools will be ten, or even five, years from now. According to both business owners and educators, those who succeed will be those who can quickly adapt to and learn new tools, and those are exactly the types of people the liberal arts education seeks to create. Unfortunately, even though businesses stress the importance of mental flexibility and adaptability, they continue to criticize liberal arts educations for failing to give students the specific skills they need. This is of course an inconsistency, but it should also be on us as educators to articulate why, in fact, businesses should want our students. There is more to an education than career preparation, of course, but that doesn’t mean we should shy away from the “sell.” Yet intellectual flexibility is just one of the liberal arts’ selling points.

The World Is Interdisciplinary

I have written many times about the skills mismatch and job training, and I think one thing that is often lost in this whole conversation is how really, really complicated the world is (I doubt that comes as a surprise). Any field that claims to have all the answers – including science! – is deluding itself, particularly when discussing the very real and complex problems we face as a society and species. For instance, one of the main things I study is the impact of environmental degradation, but although I can tell you whatever you need to know about the impacts of environmental damage, that is only part of the story. To truly understand, grasp, and address the problem, we also need to know the cultural and societal forces that lead to it, and the socioeconomic obstacles to change. An environmental scientist can’t really tell you those things, which is why interdisciplinary work is so crucial. A liberal arts education is one of the best ways to be primed for such work, and many of the top schools are even working towards explicitly using interdisciplinary education. Although this approach will not lend expertise in every field, it does teach an appreciation of the value of other fields and how they complement each other. At the very least, training students to realize that they alone don’t have all the answers to our problems is in itself something crucial in this nuanced world.

Two [or more] Heads are Better than One

One of the things that I found most fascinating about my time at Hendrix was the exposure to the sheer variety of different ways of thinking. Almost every discipline has a different way of viewing and thinking about the world, and my time at Hendrix exposed me to the full range of them (almost…I never took any economics). Although I am first and foremost a scientist and educator, my life is richer and my approach to problems more sophisticated because I’ve been exposed to the thoughts of great minds throughout history. Even on a personal level, the exposure to the great people and works of history has had a formative impact on my life: Beethoven’s music and life have been essential to my struggle to make peace with the steady decline of my hearing. Some might scoff at the more personal value of this aspect of the liberal arts, but I would immediately point out that higher education is broadly about developing new ways of thinking, including how we think about ourselves. I would also argue that there is a more immediate usefulness to this broad exposure: as we move into a more globalized and interconnected world, sensitivity to different cultures and ways of thinking is going to become a more and more crucial element of success. What better way to develop that than by actual, direct exposure?

In conclusion, although I do not think higher education should be justified purely in terms of its economic usefulness, I do think there is a very strong case to be made for the liberal arts in precisely those terms. We live in a world marked by complexity, where many problems do not have a clear answer, and in a globalized economy where our way is not always the way of others. The consensus among both business owners and education experts is that success in the world of tomorrow will be in the hands of those who can think broadly and flexibly. While we struggle to improve and sometimes wholly redefine education, the good news is that we already have a way of training people like that.

Side note: *Although there is no direct relationship between liberal arts and liberalism, there is at least an indirect connection: studies have shown that one key difference between conservatives and liberals is that liberals tend to have more flexible, nuanced, and multi-perspective worldviews. These types of worldviews are precisely the type of thinking that a liberal arts education aims to engender.

How Funding Shifts Can Hurt Students

The narrative of declining state funding in higher education is a pervasive and troubling one. Although it is dangerous to talk in broad generalizations when discussing academia, and there are bright spots of change, as a general rule, state spending on public colleges has declined dramatically. This is especially true in states hard hit by the recession, in which declining tax revenue forced cuts to university budgets (and at a time when it seemed re-education was a major priority). Fortunately, some of the gaps in state funding have been filled at least partially by federal support, especially through the Pell Grant program, but in a day when budget cuts in the hundreds of millions are making headlines, it is not much of an exaggeration to say that higher education is facing major money difficulties. Of course, increases in costs – often unsustainable ones – have also contributed to the dire financial straits of some schools, for a number of reasons that are in some cases beyond my knowledge (an economist could tell you more about “cost disease”). While this whole story about money is complex and part of a much larger conversation, I want to describe why the increasing cost of college and the shift away from state funding could end up harming the ones we should most protect: students.

As a consequence of declining state support, many universities must make up the gap with increasing tuition and institutional fees. In the case of the University System of Georgia (which includes UGA, Tech, KSU, and all the other 4- and 2-year public colleges in the state), tuition and fees contribute pretty much the same amount to the budget as state appropriations, just shy of 2 billion dollars in each case. Although I am focusing only on USG (in part because it’s my school system), I would be willing to bet that many state systems are in similar situations. What this means, then, is that enrollment is one of the key factors driving the budget of the system; when fewer students enroll, the entire system feels the pinch. This gives the schools a bizarre incentive to enroll as many students as possible, which is a terrible thing for a couple of reasons. First, this increase in enrollment typically leads to larger classes, less student-faculty engagement, and usually an increase in contingent teaching labor (none of which are good for students). Second, the importance of tuition income has the potential to force schools to try recruiting students who honestly do not belong at a particular school, or in college in general. A college education is not for everyone, and certainly a 4-year degree is not what is best for some – if not many – careers. Yet the schools have a vested interest in convincing as many people as possible that a bachelor’s is good for them.

This problem, what I call “degree creep” (that actually may be a legitimate term, I’ve heard it bandied about before), has a number of important ripple effects. First and foremost, recruiting students who honestly do not stand to benefit from a 4-year degree or may not be intellectually prepared for college has the very real potential to needlessly increase the overall student debt load; we have all heard the stories of students tens if not hundreds of thousands of dollars in debt for a degree they didn’t need. Another important ripple effect is that reliance on tuition forces schools to try to be everything for all students. I’ve written a little bit on this topic before, but to summarize, the apparent drive to focus on career preparation detracts from the overall mission of a university (research, service, and teaching). The old adage applies: if you try to please everyone, you will please no one; a school simply cannot be everything to everyone, nor should it. One final potential ripple effect that I want to put forward is that the drive to recruit as many students as possible might in fact be contributing to rising college costs.

Students require infrastructure, and infrastructure is not cheap. Even now, USG is spending hundreds of millions of dollars on building renovations and construction, in part simply to better serve larger numbers of students. State appropriations potentially pay for some of that, but it is easy to see how that could be a self-perpetuating issue: schools spend millions to support more students, recruit more students to bring in more money, and then require still more infrastructure investment! Furthermore, in order to recruit competitively, schools may find themselves in the position of needing (or – more likely – thinking they need) to build fancy, luxury facilities to cater to students. How can a school facing severe budget problems justify building such lavish facilities? “To attract students and ultimately bring in revenue.” Treating students like revenue streams also has the pernicious effect of reducing education down to a series of “transactions” rather than a genuine learning experience, but that is a topic for another time. Suffice it to say, no one wins when students are just one more way to financially support the system.

Let me close with the obligatory reminder that every school, and every school system, is different, and much of what I say refers primarily to the large four-year schools such as UGA. I feel very strongly that community colleges and their role have been ignored far too much in the national conversation about higher ed. Unfortunately, as I’ve been forced to say in many previous posts, there are no easy answers to some of the problems higher ed is facing, funding and otherwise. I am absolutely certain, however, that academia on its own cannot provide the solutions. Those can only be found when we as a society grapple with some of the hardest questions about higher education and the type of culture we want to foster. I am hopeful that that conversation will happen soon – education has already become an important campaign issue for 2016 – but when it does, we academics cannot shy away from taking part.

Learning Music to be a Better Teacher

Want to become a better teacher? While there are many excellent ways to improve your instruction, I’d like to offer one that may seem a little unusual: learn a new instrument.

I will grant that this seems at first a completely useless bit of advice; after all, what use does a biology instructor have for being able to play piano? On the surface of it, not much, though I’d also like to add that – in my mind at least – playing music is a reward on its own. There is of course a salient argument to be made that learning music makes it easier to make connections across disciplinary lines, and that alone is a huge benefit to being fluent in an instrument. As an example, I can say without a doubt that my love for music makes me a better instructor for my non-majors simply because it gives me another chance to connect with students outside of the course material. I’ve written enough about the importance of those student-teacher connections that I don’t think it needs much repetition here. Instead, I want to argue (and hopefully show) in this post that learning an instrument – more specifically, the process of learning one – provides distinct benefits derived from the insights it gives us into the process of learning.

Learning an Instrument is a Multi-faceted Process

When I first learned violin, it became very clear very quickly (and very noisily) that this is an extremely complicated instrument to master. Violin was the last of five or six instruments I play, so I already had the general knowledge of music down fairly solidly, but the actual mechanics of the instrument were completely mind-boggling. I had to learn how to properly hold the bow, how to hold the violin, proper posture, proper finger placement, and on through a very long list of skills. And of course, not only was each separate skill a challenge, but I also had to learn how they all fit together! And then there was the more musical side: emotions, timbres, dynamics and so on. The same can be said of many instruments, yet to a skilled musician, playing their instrument is simply one fluid act. We just do it, without thinking consciously about it. We can say the same thing about the skills that make up our field; truly doing ecology, for instance, requires knowing the basic foundational knowledge, understanding the extent and limitations of our knowledge, being able to produce and digest scientific research, and to place it in the context of the greater pool of human knowledge. Each of these is a distinct skill, yet to a distinguished scientist, it all makes up one thing. For many of us, our field is likely second nature (we are unconsciously competent), and so learning an instrument can potentially improve our teaching by making us consciously aware of the multi-faceted nature of learning. In a sense, it renews our sense of being incompetent, which can help us design better courses that reflect more aspects of our field than simply the foundational knowledge.

Being a Novice Learner Again

Let’s face it: learning comes naturally to us instructors. We are the embodiment of lifelong learners, and by the very nature of what we did to get here (particularly the dissertation or thesis process of an advanced degree) learning has become something we can do almost effortlessly. Many of us may have forgotten how extremely frustrating learning something new can be, and I have no doubt this colors not only our expectations for students, but also how we perceive and respond to that frustration when we see it in students. I have seen many a professor respond with “how do you not understand this?!” style shock when instead what we should be doing is offering encouragement and reassuring students they cannot be immediate experts. Learning an instrument is, frankly, a pain in the butt. Look at the long list of skills that went into the violin; the only way to master them is long hours of practice and self-criticism. It’s not a comfortable process, and after 7 years of playing I still have a lot of stuff to work on. Just as learning an instrument can make us more conscious of the multi-faceted nature of learning, it can also make us [painfully] aware of the frustration that comes with breaking down knowledge barriers. It might also highlight areas where our expectations for students are misguided; just as no one is an instant advanced musician, we cannot expect our students to be instant experts in our fields.

The Song of Learning

In many ways, I would argue that learning the totality of a field (i.e. the overarching goals of, say, a whole course) is almost perfectly analogized by the learning of a new song. Although we may spend days or weeks on particular parts of that song/field, it truly only comes into existence when we play the whole thing in context. This also highlights a very important learning strategy called interleaving, where students focus on individual aspects of a skill, interspersed with focusing on the whole context or message. Musicians learning a song do the same thing; our orchestra rehearsal seasons begin with a night where we just play through the entire concert to get a rough feel for the grand scheme of the works; subsequent rehearsals are spent focusing in much more detail on particularly troublesome spots. I’ve been slowly learning the opening to Bach’s Violin Partita No. 2, and there are times when – although I prefer to play the whole thing – I spend half an hour on one or two measures. It is not much of a stretch to see how this same approach is used in learning, and so going through the process as a novice musician can potentially give us important insight that we might use to structure and plan a course.

To close, I must point out that the three main benefits above are absolutely not limited to learning an instrument; indeed, I’m fairly confident that most skills sufficiently far from our areas of expertise could yield similar insights. Of course, listening to and playing music has documented cognitive benefits as well, though that research is far outside my ken (perhaps it’s a new field I should learn!), but I think learning an instrument can also directly impact our teaching by reminding us what it feels like to be novices. And even if that weren’t true, is not music a worthy end unto itself?

Online Learning: a powerful [and misused] tool

When I think of the changes that have swept through academia in the decade I’ve been in it, it’s difficult for me to think of a bigger one than the advent of online education. I have “come of age” (both literally and figuratively) at the same time as IT forces have allowed large-scale online instruction, and it is now difficult for me to imagine an academic world without online instruction. Of course, distance education is not new, even if it has changed faces, but at no time before has online learning so deeply permeated undergraduate education. Although this can be a force for good, the advent (some would say “tyranny”) of online education has not always been a net positive; I have personally experienced dreadful online courses and found myself cursing their existence. The fact of the matter is that online learning is nothing more or less than a tool – not an end unto itself – and like all tools, the benefits or harms depend on how well it is used.

Despite all the negative press online learning gets, and despite the pushback from those who [usually justifiably] insist on face-to-face learning, online education can serve a very powerful purpose. Many students – particularly those nontraditional ones who are returning to school to find a new career – simply cannot be in a classroom all the time. Time demands from a part-time job or other concerns might make scheduling impossible, and transportation might be unfeasible or unaffordable. Other students might simply not work well in a traditional classroom for behavioral or other personal reasons. For all of these, online education can make the difference between getting an education and not getting one, and in those cases it becomes a hugely beneficial tool. Unfortunately, in practice, online education is not always used primarily as a way to increase accessibility. For many universities, online education can be a very attractive revenue stream at a time when traditional state support is dwindling; this is especially the case if the online courses are “farmed out” to junior faculty or even private companies that dramatically lower the front-end cost. Sadly, many junior faculty members – already a group that may not have specialized training in instruction – might have very little support in developing online courses, and the pitfalls of bad course design are dramatically inflated in online courses compared to their face-to-face counterparts. That said, online coursework does not necessarily have to be inferior in quality, but to keep it from being so, we have to understand a few major things about designing and executing online courses.

Teaching a good online course is not easy

One reason that many universities might see online learning as an attractive revenue stream is the apparent notion that it can be done at much lower cost and effort. Rather than having an expensive tenured faculty member, an online course can be executed by a part-time contingent instructor whose primary job is to develop, curate, and grade materials. Why pay for a full-time, fully supported professor to teach 20 students when you can get the same tuition and spend half the money? That’s a gross oversimplification, I know, but the point is that treating online courses as some sort of junior teaching endeavor ignores the very real and difficult challenges online learning presents. In a traditional class, an instructor has immediate and tangible feedback about how the class is going, and can make immediate changes as needed. Online courses, on the other hand, may be entirely written beforehand, and in some cases there may be little to no meaningful interaction with some students. In that situation, it takes a very skilled and adept instructor both to design a course that won’t run into problems and to pick up on the scant signals that something might be awry. Although there are many excellent adjunct instructors out there, many may simply not yet have the experience or support necessary to tackle the difficult job of creating a great online course.

Online students are not the same

At the risk of making sweeping and unjustified generalizations, the students who take online courses simply are not the same – on average – as those in traditional courses. They may be non-traditional students just returning to school, they may be students who are simply not comfortable in traditional classrooms, or they may be students who view online courses as an easy way to get the same credits without the time demands. All of these cases present unique and often difficult challenges to an instructor. For instance, returning students might be overwhelmed by being in the classroom again, and may be in need of an intervention to provide the academic and moral support to help them gain confidence. Recognizing and providing that support is not easy, and once again, if online courses are “farmed out”, they may be taught by faculty who, for any of a variety of reasons, are not prepared to deal with student issues like that. Even my own immediate teaching supervisor, who is herself an excellent teacher, was surprised by how different the students were in her experimental online lab course, so this is not a trivial challenge. The fact of the matter is that students who sign up for online courses are self-selecting to a degree, and knowing the type of student you will have is a major factor in executing a good online course, one not often considered in the course design process.

Providing rich learning experiences is difficult online

One of the most powerful aspects of a good course – traditional and not – is providing rich learning experiences to students, which lies at the core of active learning strategies. One of the reasons I love teaching labs is that labs are inherently active learning, but lecture time slots can also be used for working real-world problems, designing projects, or any other variety of learning experiences beyond content delivery. When dealing with a course delivered entirely online, this becomes much more difficult. This is not, of course, to say that it is impossible, but in my experience many online courses focus primarily (if not exclusively) on content delivery. This may partially just be a reflection of the belief that content delivery is the sole purpose of any course, but it can become even more harmful in the case of online learning. In a traditional lecture, students at least have the chance to potentially engage with the material through dialogue, questions, etc., but in an online course of the same material, that possibility may easily be lost. Therefore, it becomes even more important to actively consider the learning experiences that will be provided online to students, because if they are not explicitly considered, the course risks falling flat.

There is no substitute for face time

The most salient argument against online education is that it does not allow the same sort of personal interaction and student-teacher relationship traditional classrooms allow. We know that the student-faculty relationship is possibly the most important aspect of a good education, so this is a damning argument. Unfortunately, there is not really a good response except making sure that every student has a chance to interact personally and meaningfully with the instructor. And I don’t mean asking questions and getting answers, but actually engaging in a deeper dialogue that extends beyond the class and allows building that relationship. This simply requires a lot of time, often as much [if not more] time as a traditional course would require, and there’s no way around that if we want an online course to work. Furthermore, as mentioned above, the students in an online course might not even want that student-teacher interaction, or they may (in the case of returning students) be intimidated by the technological component. None of these are truly insurmountable obstacles, but they require a skilled hand to deal with, and overcoming them is the sine qua non of a successful online course.

For all their disadvantages, online courses and online learning are here to stay. This is not necessarily a bad thing, but it is absolutely essential that we as instructors and faculty members are aware of the unique challenges associated with online instruction. I think it is clear that it is not appropriate to simply hand off online coursework to a novice part-time instructor, yet the possibility of saving money by doing so is one of the motivating factors driving university decisions. Online coursework can be very powerful in terms of increasing education accessibility, but that power cannot be realized if the proper investment in skilled instructors is not made. If online coursework is treated simply as an additional revenue stream, it will lead to a situation where no one wins: students will lose out, and faculty members will see their role continue to erode into that of mere content-delivery vehicles. On the other hand, if done well, online learning is a very powerful tool for inclusion, and so should be pursued as far as possible to that end.

I’d like to close with my own vision of the future of coursework, online and not. The internet is a very powerful engine for finding and accessing content, and so I think the ideal course in the future will be a hybrid of online and traditional. Students will find and absorb content on their own through online delivery, and the limited “lecture” time will be spent on active-learning experiences. I don’t think we’ll ever move to an entirely digital classroom (maybe…holodecks, anyone?), but that doesn’t mean online tools can’t dramatically improve the courses we deliver. IT is a tool, and like all tools, it is only as good as how you use it.

Science Friday: Uncertainty and Climate Change

I’ve decided every Friday (in deference to Ira Flatow’s show of the same name) to dedicate my Friday blog post to issues of the environment, ocean, ecology, conservation, and just generally, science. In other words, what I actually study when I’m not reading/writing about education and policy. This is motivated in part by a nice post that appeared on Inside Higher Ed recently, but is mainly just a chance for me to exercise some of the informal science communication I hope to explore as a career. For this first Science Friday, I’d like to discuss climate change, namely the uncertainty in climate science, where it comes from, and what it means.

One of the most common refrains used by those who disagree with climate scientists is that the “science is just not settled.” These same people tend also to point to the failure of many climate models to accurately predict the rise in global sea and surface temperatures, and the apparent lack of significant warming over the past decade (even though that apparent “slowdown” may not actually be real). Why should we make major policy decisions if we’re not 100% sure about the science behind it? That’s a perfectly reasonable point, but I think it highlights a grave misunderstanding of uncertainty, statistics, and climate patterns that permeates the public discourse about climate change. Although I am not specifically a climate scientist, I am deeply familiar with the issue as a marine scientist, given the intimate connections between the ocean and climate. I am also a numerical modeler, so although I do not work directly with climate models, I have extensive experience with the pitfalls of modeling, and so hope I can elucidate more clearly what we mean when we talk about “uncertainty” in the models.

Let me be very clear about one thing: climate change is happening. There is no scientific doubt about that based on what we currently know. There is also pretty much no scientific doubt that human civilization is a main driver due to the production of greenhouse gases such as carbon dioxide. We have a clear trend, and we have a clear mechanism; that’s actually more than we can say about many natural phenomena (such as gravity, which we can’t explain at all). Conceptually, there is no uncertainty whatsoever about the reality of climate change and the role of humanity. Scientists tend naturally to be equivocal due to the hairy nature of “proof” in empiricism, which has unfortunately helped feed the “uncertainty” narrative, but this is one of those very rare times when there is a consensus. What is less clear is the magnitude of change, its anticipated effects, the speed, and how it fits into the larger context of natural variation. So, the claim that the science is not settled is – from the most technical standpoint – true, but not in the way that it’s usually conveyed.

One of the most powerful tools for predicting large-scale trends (in terms of both space and time) is computer modeling. I use this technique to predict natural processes in an estuary that are nearly impossible to measure, and meteorologists use models continually to make forecasts so you can decide whether to wear a T-shirt or pack an umbrella. Models have changed substantially in recent memory thanks to a dramatic increase in computing power, but at their core they remain essentially the same: we start with a basic conceptual understanding of a system, break it down into mathematical representations, and then use data to parameterize or “fit” a model so we can trust its predictions. So what does this have to do with the climate change conversation, other than knowing how we actually make predictions? I would argue that understanding the uncertainties and pitfalls of modeling is essential to understanding uncertainty in climate science.
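
To make that abstract description a bit more concrete, here is a minimal sketch in Python of what “use data to parameterize or fit a model” looks like in practice. The model, the data, and every name in it are invented purely for illustration; this is not a real climate (or estuary) model.

```python
# A toy "conceptual model": a long-term linear trend plus an annual cycle.
# We express it mathematically, then let observed data determine the parameters.
import numpy as np
from scipy.optimize import curve_fit

def temperature_model(t, baseline, trend, amplitude):
    """Hypothetical model: baseline + linear trend + seasonal cycle (t in years)."""
    return baseline + trend * t + amplitude * np.sin(2 * np.pi * t)

# Synthetic "observations" stand in for real measurements here.
t_obs = np.linspace(0, 30, 360)  # 30 years of monthly data
temps = 15 + 0.02 * t_obs + 2 * np.sin(2 * np.pi * t_obs) + np.random.normal(0, 0.5, t_obs.size)

# Fit the model: curve_fit returns the best-fit parameters and their covariance,
# which is itself a first measure of how uncertain those parameters are.
params, covariance = curve_fit(temperature_model, t_obs, temps)
print("fitted trend (deg/yr):", params[1], "+/-", np.sqrt(covariance[1, 1]))
```

Notice that even this toy fit hands back an uncertainty estimate alongside the parameters; that theme of unavoidable uncertainty runs through everything that follows.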

If we run a model and the results don’t make sense or don’t fit actual measurements – as is the case with many climate models – there are a few possible reasons. First, it could be that the conceptual model, how we think the system works, is simply wrong; although that is sometimes the explanation, it’s actually the least common source of model uncertainty. Another major issue with modeling is that many of the equations involved don’t actually have analytical solutions; they must be approximated using numerical methods such as Euler’s method (the most basic of many related approaches). Although modern software is typically very good at providing accurate estimates or solutions, it can only do so much with highly complex systems such as climate that involve hundreds (if not thousands) of separate equations. Generally, though, that is a problem well known to modelers, and it can usually be managed with well-planned methodology. So, assuming the conceptual model is sound and the numerical estimation is accurate, there remains only one major point of uncertainty: the data.
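
If “numerical methods such as Euler’s method” sounds mysterious, here is a tiny, self-contained illustration in Python. The equation being solved is invented purely for demonstration and has nothing to do with any actual climate model.

```python
# Forward Euler: approximate the solution of dy/dt = f(t, y) by small steps.
def euler(f, y0, t0, t_end, dt):
    """Step the ODE dy/dt = f(t, y) forward from (t0, y0) with step size dt."""
    t, y = t0, y0
    history = [(t, y)]
    while t < t_end:
        y = y + dt * f(t, y)  # forward Euler update
        t = t + dt
        history.append((t, y))
    return history

# Example: simple exponential relaxation toward an equilibrium value of 10.
relaxation = lambda t, y: -0.5 * (y - 10.0)
trajectory = euler(relaxation, y0=0.0, t0=0.0, t_end=10.0, dt=0.1)
print(trajectory[-1])  # approaches 10.0; a smaller dt gives a better estimate
```

The key point is that the answer is an estimate whose accuracy depends on choices like the step size, which is exactly why careful methodology matters.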

Models are only as good as the data that go into them, and more complex models require astonishingly large amounts of data simply to be able to execute. To use an example from my own research, a model to predict nitrogen reduction (a single value) required a dozen equations and a few dozen variables; many of these are themselves largely unknown! Even assuming the data are in fact known, there is inherent uncertainty with every single measurement (which is what leads to “significant figures” when writing data points); no measuring device is completely precise, and there is always the specter of human error. With very large and complex models, this natural uncertainty can very quickly propagate through a model, so that the final result, even if sound, is still inherently uncertain. Quite simply, there is no way to deal with that; it is just the nature of empirical measurements. Of course, as data collection methods improve, that uncertainty diminishes, but it will never be eliminated.
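
As a rough illustration of what “propagate through a model” means, here is a small Monte Carlo sketch in Python: run the same calculation many times with the inputs perturbed within their error bars and look at the spread of the outputs. The toy model and its numbers are entirely made up; this is not the actual nitrogen model from my research.

```python
import numpy as np

def toy_model(flow, concentration, removal_rate):
    """Stand-in for a more complicated calculation, e.g. nitrogen removed per day."""
    return flow * concentration * removal_rate

n_draws = 10_000
rng = np.random.default_rng(42)

# Each input is "known" only to within some measurement uncertainty.
flow = rng.normal(100.0, 5.0, n_draws)          # e.g. m^3/day, +/- 5
concentration = rng.normal(2.0, 0.2, n_draws)   # e.g. mg/L, +/- 0.2
removal_rate = rng.normal(0.30, 0.05, n_draws)  # fraction removed, +/- 0.05

outputs = toy_model(flow, concentration, removal_rate)
print("mean:", outputs.mean(), " spread (std):", outputs.std())
# Even modest input uncertainties produce a noticeably wide range of outputs.
```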

How, then, does this relate to uncertainty in climate science? Well, in addition to the natural uncertainty inherent in models, climate models are faced with the daunting prospect of relying on massive amounts of extremely noisy data. Even though temperature graphs show a clear upward trend, there can be as much variation within a year as there is between years, which makes it nearly impossible to detect statistically significant trends; it’s not that the pattern isn’t there, it’s that it ends up buried in noise. There are ways to deal with this, such as something called a Fourier transform, which can be used to filter the noise out of a signal, but when exposed to the broader public, such statistical “tricks” unfortunately feed the untrue notion that scientists are faking data. Furthermore, climate scientists are dealing with data that are continually being revisited and improved; just yesterday a new paper came out showing that the “hiatus” in warming quite possibly stems from bad data in the past. Related to this is the fact that we are still learning about how the climate works in detail; keep in mind that satellite records of global temperature go back only a few decades! There are a number of natural oscillations which exist on those same time scales (such as the North Atlantic decadal oscillation currently swinging into place), so it’s entirely possible that there are still some climate drivers that we don’t yet know about. So yes, there is a great deal of uncertainty in climate science, but that’s not to say there is uncertainty about the reality of climate change.
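
For the curious, here is a tiny sketch of the idea behind that kind of filtering, again using an invented noisy series rather than real climate data. Nothing is being fabricated; only the fast, random wiggles are removed, leaving the slow underlying signal.

```python
# Fourier-based smoothing: transform to frequency space, drop high frequencies,
# transform back.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 600)                      # e.g. 50 years of data
signal = 0.02 * t + 0.5 * np.sin(2 * np.pi * t)  # slow trend + annual cycle
noisy = signal + rng.normal(0, 0.4, t.size)      # add measurement "noise"

spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 2.0] = 0                        # discard anything faster than 2 cycles/year
smoothed = np.fft.irfft(spectrum, n=t.size)

# The smoothed series tracks the underlying signal more closely than the raw one.
print(float(np.abs(smoothed - signal).mean()), float(np.abs(noisy - signal).mean()))
```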

To close, in the interest of intellectual honesty I must entertain the possibility that we are wrong about climate change. It’s an extremely remote possibility, but the nature of science keeps us from ever truly eliminating it. I must point out, however, that the degree of certainty about climate change far exceeds the certainty of knowledge with which, say, we make medical decisions or individual diet choices. Furthermore, carbon dioxide does more than just warm the planet; a substantial portion of the oceanic ecosystem is reliant on organisms that produce calcium carbonate shells or skeletons. We know that carbon dioxide dissolves in the ocean and lowers pH, and we’ve already been able to measure ocean acidification. The economic consequences of acidification could be extreme as entire fisheries collapse, and so that alone should be sufficient reason to worry about carbon dioxide, regardless of the warming effect. In the end, even if our power to correct climate change is limited, there’s not really a valid excuse for failing to do everything we can when we know the consequences.

Academia: not always about teaching?

What does a professor do? Are they teachers?

If you ask a typical member of the public to describe a stereotypical university professor, I would be willing to bet that the description invariably includes at least some mention of being in a classroom and working with students. If you look at the typical discourse, e.g. in Wisconsin or – especially so – in North Carolina, it seems that the public perceives the professor as only a teacher. After all, why would the North Carolina legislature have deemed it appropriate to try to legislate a mandatory 4-4 teaching load (that’s 4 courses per semester, if you’re not familiar with the lingo)? Teaching is, without question, a major component of a professor’s life, but the public might be surprised to know that, in fact, it is research that is the professor’s primary role. The sine qua non of an academic life is publications, and with very few exceptions, tenure cannot be obtained without evidence of an active scholarly record. Indeed, I would argue one of the biggest misconceptions permeating the national conversation is that universities (particularly R1 schools like UGA) exist primarily to teach. That’s simply not the case; UGA’s mission statement – like that of many schools – explicitly includes research, education, and service. Many people might not even be aware that, as an R1 institution, research takes precedence here! That, however, is not what I want to write about today; it is no secret that the national conversation is egregiously bad at lumping together and oversimplifying the roles of different schools. What I want to talk about is the role of teachers, people like myself who have always wanted first and foremost to be in the classroom, and why we have such difficulty finding a place in academia.

One of the many reasons I am exploring careers outside of the Ivory Tower is that my research, to be blunt, is not sexy and exciting; it’s not the type of ground-breaking research that lands tenure-track jobs. I would argue that it’s very, very important for our future, but the fact is that coastal ecology is a relatively small field and it doesn’t garner the attention (i.e. grant money) that biotech or genetics does. As a professional, however, I personally consider my research second string to my teaching expertise. I have substantially more teaching experience than someone at my stage would normally have (ten semesters compared to the single semester required for my degree), and I have actively pursued formal training in the theory of teaching and learning. This makes me extremely unusual – some PhDs graduate without having ever taught a course (depending on the program requirements) – but that is not necessarily an asset in traditional academia. The fact of the matter is that a PhD is about training scholars, not educators, which is one reason that a number of damaging and misinformed beliefs still permeate the college classroom. Typically, the belief has been that expertise was all that was required to make a good teacher, a belief which has been slow – entirely too slow, in my mind – to change towards the one I enshrine in my teaching philosophy.

Teaching is in and of itself a rigorous intellectual act requiring extensive preparation and scholarship separate from the mastery of the subject material.

In my opinion, this is one of the biggest failings of the way we train and recruit faculty members. The many excellent teachers out there exist in spite of, not necessarily because of, the graduate school system. This is, thankfully, changing: UGA requires its PhD students to take at least a one-semester course on teaching, usually concurrent with at least one semester of actually teaching as well. Attitudes, however, are much more stubborn, and even though winds of change are blowing, academia is notorious for being slow to make shifts. Even as we grapple with the increasing importance of education, research still remains at the center of a traditional, tenure-track career. At the same time, the ever-increasing teaching load is “farmed out” to a continually growing pool of oft-exploited adjunct instructors. What this means is that people like me, who consider themselves educators first and take the time to dive into and master the scholarship of teaching and learning, find it very difficult to find a place in traditional academia, particularly one where we can make a reasonable living. But lest I let it all seem gloomy, there are bright spots: more and more schools are realizing that teaching expertise and subject expertise are distinct masteries, and creating positions specifically for those with teaching expertise. Overall, this is a great thing, and hopefully will continue, but sadly, these types of positions are still few and far between. What we really need is a shift to put education alongside research on a pedestal of respect.

Changing attitudes is, to put it mildly, not easy, and changing the policies that express those attitudes is even harder. One of the best ways to make a place in academia for teaching specialists would be to invite them onto the tenure track. Some might argue that teachers have no need for tenure, because tenure’s main role is to preserve academic freedom for professors who might engage in controversial but crucial research. Teachers, excepting perhaps those teaching the most controversial courses (which would likely be the domain of tenured professors anyway), don’t need those same sorts of protections, the argument goes. I think this is woefully wrong. Denying the same sorts of protections to teachers potentially puts them at the mercy of feedback from students who may vent frustration on course evals, and I’ve written before about why that might be problematic. Let’s face it: some of the best teaching requires a lot of work and intellectual discomfort from students, and having tenure-like protections would encourage educators to engage students in those meaningful conversations without fear of losing a job over being “too tough.” Of course, giving tenure on the basis of a teaching record also has the added benefit of increasing a teacher’s investment in his/her school and students, for the benefit of everyone involved in the process. I will grant that actually measuring teaching effectiveness could prove substantially more challenging than measuring scholarship, but I would argue that the potential benefits far outweigh the challenges.

To close, let me point out that we already have a whole class of schools that tackled the teaching/scholarship divide long ago: the small liberal arts school. Tenure at these schools still does involve research, though I think we should push for a career path focused entirely on the scholarship of teaching and learning. These schools long ago realized that the divide between scholarship and education was at best flimsy, and I think that’s just one more thing larger state schools can learn from their smaller brethren as they face mounting pressure to refocus on education.

On Grades, Pt. II: Inflation

Grade Inflation. To some, it is a necessary evil, to others it is a sign of the decline in higher education, and to yet others it is a problem that either doesn’t exist or is blown entirely out of proportion. So who is right? Is grade inflation an unfortunate necessity, a pernicious decline, or a non-existent overblown problem?

Yes.

As with most aspects of academia, grading and grade inflation is an extremely complex issue, with multiple pressures from multiple forces. Although I have written before about grading and assessment, I have yet to really opine on the actual numerical grades. Although I had not originally planned to do so, an interesting article on Inside Higher Ed (and the articles therein) got me thinking over my writing hiatus. Grade inflation is not a new problem, of course; when I was at Hendrix, the GPA cutoff for summa cum laude was (it seems) extremely high (3.95), at least in part to combat the effects of grade inflation. I will admit, as a young science major I was always a little confused about the idea of inflation, because I was used to a world dominated by objectivity. In my world, your grade was objectively based on how well you understood the material, nothing more, nothing less. I realize now that this view was not only overly simplistic (there is without a doubt a subjective component to judging mastery even in science), but it also ignored the difficulty of assigning grades in fields which are intrinsically much more subjective yet just as important, like philosophy or composition. I will admit I personally do think grade inflation is less pernicious a demon than it’s been made out to be, but in recent years there is no denying that it has become – at least in the mainstream – a major issue, though like many major issues it is often not treated with the nuance necessary.

What can cause grade inflation? I don’t think there is any one driver of the overall increase in good grades, and I would be willing to bet that the drivers differ across schools. Although it is almost certainly impossible to entirely “diagnose” the rise in grades – and just as certainly the diagnosis is different for every school – I would argue that addressing grade inflation is not possible without a good understanding of what drives it. For instance, if grades are inflating because professors are lowering standards, the solution is to restore or even raise those standards; on the other hand, if grades are rising because the content is no longer challenging, then the solution would be to update the content (rather than change the actual grading). It’s also important to consider that the media attention on grade inflation has largely focused on, and come from, the elite schools: Ivy League universities and similar institutions. While I have yet to be convinced that the education at those schools really differs from that at “non-elite” schools, I think we also have to consider very important distinctions in the student body when expanding the grade inflation conversation across the nation.

One of the proposed sources of grade inflation is that students are simply coming into school better prepared. This is certainly a sensible argument, but it flies in the face of the national conversation about the quality of our schools. While I will definitely grant that the conversation about K-12 education is badly overgeneralized, the argument of increasing student quality also seems to be at odds with my own experience. Although my students can learn quickly when pushed, I would not go so far as to say high schools have prepared them well for what I teach in terms of foundational knowledge. Dr. Mark McPeek at Dartmouth seems to feel the same way, but I think it’s worth noting that national trends probably play out very differently at an elite school. I have every reason to suspect that the pool of applicants to places like Dartmouth, Harvard, and other elite schools sits in the top percentage points of students; even if, nationally, we have not seen much change in overall high school student capacity, the elite schools might indeed be seeing an increase in competency if their applicants are an increasingly rarefied portion of the overall student body. In that case, it becomes very plausible that in the upper-crust schools, increasing grades are indeed driven by increasing competency; it’s not that these schools are failing to distinguish between good, intermediate, and poor students, but rather that their students fall into only one class. The solution, of course, is to increase the rigor of the curriculum, if indeed that distinction is desirable in the first place (a separate conversation, I think). I’ve said time and again that the changes in information availability have freed academia up to teach higher skills instead of content, and this would be a prime example, if indeed increasing grades are driven by increasing competency.

Another proposed source of grade inflation is reduced rigor of the academic curriculum. Figuring this one out is very challenging given the sheer variety of curricula, but I can at least say from my own experience that by some measures, curricula are getting less rigorous. However, I must immediately follow that up by saying this decrease in rigor comes in the form of reduced content. Because that reduction allows us to instead focus on deep learning and skill development, I would argue that it is not actually a bad thing. Indeed, as I have been revising my own course to reduce content and emphasize higher-order skills, I have seen a dramatic increase in overall grades; I have also seen a dramatic increase in true competency (as opposed to the ability to regurgitate information). Some might see the reduction in overall content as a decrease in the rigor of my curriculum, in which case the obvious solution would be to add more content back in, but I would argue very strongly that this would do more harm to students than good. This is not, of course, to imply that we should never upgrade the rigor of our curricula, but that we must be very careful about what we mean by “rigorous.”

The final possible source of grade inflation (that I will discuss here) is the drive to improve the post-graduate prospects of our students, particularly admission to professional (law, dental, medical, etc.) school. To keep our students competitive, the argument goes, we have to give them the best grades possible. This is a particularly troublesome argument because, despite some counter-arguments, there’s at least a small bit of truth to it. I will grant that I am going mainly from experience here (my father is on faculty at the University of Arkansas for Medical Sciences, UAMS, and my brother and many friends currently or formerly attended), but the fact of the matter is that grades do indeed play some role in professional school acceptance. They are nowhere near as important as the standardized tests (MCAT, LSAT, etc.), but lower grades could potentially harm a student’s admission chances (assuming the test scores, which reflect the education as well, do not change; the combination of mediocre grades and high test scores raises suspicion). This fact raises a major concern for faculty. Although some, such as Dr. McPeek, argue that our role is simply to give the best education possible, we also want our students to succeed. Furthermore, for better or worse, acceptance rates into professional school are already used as an informal benchmark to compare schools; whether we want it or not, I can easily imagine the acceptance rates of our own students being used to formally evaluate faculty in the near future. Unfortunately, if this is a main driver of grade inflation, there is no good solution; as with the issue of job training, I think the only step forward is a meaningful inclusion of the professional schools in the conversation.

To close, I want to add some thoughts on why grade inflation is a part of a much larger conversation on the role of higher education, because the importance of grades’ upward slide might change depending on the role the university holds. If the university exists only to prepare students for the job market, and grades accurately reflect competency, then everyone getting A’s isn’t a problem at all; it simply means that all the students are competent. If expected competencies change, of course, then the rigor of the curriculum and/or grading will change, but other than that, “grade inflation” simply becomes “increasing competence.” On the other hand, if you – as I do – consider the university to hold a higher role, of creating better citizens, people, and lifelong learners, then grade inflation is indeed a major issue, regardless of its source. If grades are sliding upwards because students are not challenged enough, then this is a missed opportunity to take the national collective knowledge to the next plane; if grades are sliding because standards are dropping, then it devalues the degree and robs us of the chance to keep the nation on the cutting edge. This is just one more reason we need to have that conversation.