Evaluations – what we have learned

This week we returned to project management, and arguably one of the most important parts of the project management (public history) process: evaluating our success (or lack thereof). All of the articles asked some great questions and stressed a few key things. But before I get to that, allow me to follow a (very) quick tangent. I have often been accused of “asking all the hard questions”. Usually at work, when someone (or multiple someones) is attempting to divine the silver-bullet solution to some inordinately complex problem, I am the one sitting in the corner asking the hard questions. I’m the one who wants to pull the issue apart further, delve into different aspects of the question, or start from scratch to better understand what the problem really is and what we are trying to accomplish. Believe me, this has been met with frustration and unhappiness more often than with open-mindedness and accolades.

For those of you who read Preskill’s article, you can already see where I’m going. I really enjoyed her ideas, especially her four “imperatives”. She echoes Stephen Covey: “start with the end in mind” (her first imperative). Before we even begin a project, we must think about the end. What do we want the public to get from our project? If we agree with Preskill when she quotes Weil’s claim that “the ultimate goal of a museum was to improve people’s lives”, then we need to think about that “ultimate goal”.

We also need to do so in concrete terms. Preskill challenges us throughout the article, stating that “people can easily articulate in general terms what they want to achieve overall”, but that they tend to stay at the “30,000 foot level” and have a difficult time talking specifics. These are some of my most frustrating moments in meetings. We define success in broad, undefinable terms. We talk about wanting people to “engage”, to “learn”, or to “enjoy” their experience. Far too nebulous. If we are going to be able to gauge success, then we need to get specific. Really specific.

To do this, we must have “courageous conversations” (her fourth imperative). She puts it out there: this is not easy to do. This is not easy to collaborate on, or to agree on. Any team (any team worth its salt, anyway) will come together with a wide array of personalities, unstated goals for the project, and idiosyncrasies that will only emerge as people begin to work together. But this is where the best ideas come from; this is where the best projects emerge. Working together, we can accomplish more than we thought possible, and we can develop ideas that we could not have envisioned as individuals. But to do this, we must approach evaluations as an “affirmative data collection” process (her third imperative). Evaluations are not inherently negative. They only become negative when we think about them in those terms. Evaluations are not necessarily a judgment on all the things that did not go right with the project; they are just a way of discovering what might work better.

This semester has taught me a lot, not only about digital history, but also about public history. It has taught me that we must constantly be thinking about what we are doing, who we are doing it for, and why we are doing it. It has taught me that the project management skills used elsewhere are widely applicable, and very useful in creating and maintaining digital public history websites. It has also taught me that it is incumbent upon us as historians to ask the hard questions, to think critically about how we want to engage our audiences. Open-mindedness and hard work are critical to success. Oh yeah, and start with the end in mind.

2 comments

  1. As always, a most excellent post!

    Evaluation was one of the key things really stressed at Museums and the Web this year, and some evaluators themselves were caught off guard by that, in a happy way. One tweet summed it up best: “Theme: Making data driven design decisions is so much easier and less risky than just making design decisions #mw2014”

    In the project I’m working on for my job, having an evaluator involved from the beginning has really made all the difference. We’re not even deciding on our platform until we’ve conducted audience research, so that the platform we choose will be based on actual use. This has caused some consternation among those not used to operating that way, but it will lead to a much better-informed decision, and thus a more widely used site.

    Again, great post! Every group needs someone like you who will ask the tough questions that need to be asked.

    • Thank you! I’m so glad you’ve enjoyed my posts. I realize they are a bit irreverent at times, but hope that makes them more fun and more engaging.
      Thank you also for sharing your experience from work. Having an evaluator involved from the very beginning is a great step, and shows that there is a strong leader somewhere in the organization. I can empathize with the challenges and negativity sometimes felt when doing things that at first appear harder. But in the long run, you’re right, you’ll have a much better project. Just don’t expect the opponents of the idea to see that the reason you are so successful is all the hard work upfront. They will probably still think they could have accomplished the same things with less effort. 🙂 It’s a constant battle.
      But, I’m very glad to see these ideas getting so much coverage – perhaps we are making progress!
