I just finished reading Education Unbound: The Promise and Practice of Greenfield Schooling, by Rick Hess. I think Rick is really smart; I always try to attend conference sessions where he is presenting. When I listen to Rick, I invariably learn a different way of thinking about an issue. Three quotes from this book really stood out.
On the relationship between for profit companies and educators:
“[T]he federal government spends 100 times as much on medical research as it does on educational research… [A]lthough pharmaceutical and medical device firms ‘like to present themselves as engines of innovation and discovery, it turns out that the health sciences R&D climate in the United States — and most of the breakthroughs — depend largely on government funding of innovation through the National Institutes of Health and at universities.’ … When it comes to taking research and turning it into something useful, though, it is commercial providers — not researchers or government officials — who typically have the requisite incentive, capacity, and tools… Unfortunately, although these for-profit firms are best equipped to develop research into something useful, researchers and practitioners [in education] are often uncomfortable working with them and thus keep them at arm’s length. Meanwhile, these working relationships are commonplace in R&D-intensive sectors like aerospace and biotech (p. 38-39).”
I found this passage significant because I certainly have a tendency to keep for-profit firms at arm’s length. Rick’s point that there are some things for-profits do better than other groups was new to me in an education context. Textbook publishers seem to be a counterexample, but ALEKS (aleks.com) and Fast ForWord are examples of software packages I find interesting that, as I understand it, were first developed by researchers and later turned into commercial products. I vow to be more open-minded about for-profit companies. [Note to salespeople: this is not a request for you to contact me with “a way you can really help me out.”]
On entrepreneurial leadership and educators:
“[Y]oung teachers generally work alone in their classrooms, have few school-related responsibilities outside their own classrooms, and develop professional networks restricted to fellow teachers. As a result, a teacher’s circle of contacts is often limited to his or her peers and provides little opportunity for the kind of professional development more likely to nurture entrepreneurial leaders (p. 42).”
This made me think of an experience I had speaking to first-year Teach for America teachers. I was surprised and a bit disappointed that TFA wanted its new teachers to listen to a group of people far removed from the classroom (I was the sole educational practitioner, and I hadn’t been a full-time teacher in years). I saw it as a weakness that TFA didn’t seem to value its new teachers learning how to get better at teaching. I still find that a concern, but I now see why TFA wanted its new teachers exposed to philanthropists, business people, lawyers, and other non-teachers. Since TFA’s goal is to get young people enthusiastic about working in and for education, whether as teachers, lobbyists, politicians, or entrepreneurs, building a professional network that spans disciplines matters. It makes me wonder about the role of developing professional networks within the HTH Graduate School of Education. I could imagine, for example, a two-day Educational Entrepreneurs conference for graduate students in education, law, business, and other fields who share an interest in improving educational opportunities for students, whether from inside or outside the classroom.
On a variety of kinds of data for measuring schools:
“Consider the ‘independent reviewer’ model, in which third party providers establish a business based on evaluating providers — as with Fiske’s college guides, RottenTomatoes.com’s movie reviews, or Consumer Report’s comparison of laptops. Some such models rely on expert reviews, others on the experiences and opinions of consumers, and others incorporate data, lab tests, or formal comparisons. These models have great promise in education to provide useful metrics and equip leaders to make a variety of distinctions on cost and quality (p. 71).”
I am frustrated that talk within the “education reformer” camp is limited to two positions: either you support reducing all of education to a score on a multiple-choice test, or you don’t care about the kids and are opposed to reform. I am pro data, pro evidence-based decision making, pro accountability, and pro outcome-based measures. I am opposed to the idea that multiple-choice tests are all that matter. I have struggled to articulate policy ideas that capture all of these sentiments; saying you oppose multiple-choice testing as the sole measure of results is not enough. I believe deeply in looking at student work through presentations, exhibitions, and other means. Larry, Rob, and I have discussed the “silver bullet” metric: take the students (in a school, a district, a state) who qualify for free and reduced-price lunch in 9th grade, and ask what percentage of them go on to graduate from college. That’s the subject of another post. What I like about Rick’s point is that it expanded for me the range of possible kinds of data. I could imagine a website that reviews various math software packages, complete with teacher, student, and parent comments. I could imagine a Fiske-style college guide for high schools. Rick’s idea helps me see that people in other sectors grapple with data more sophisticated than a score on a multiple-choice test, so it should be possible to do this in education too.
Thank you, Rick.