Seeing Data

Part of what makes what I do so interesting is the interconnectedness of the moving parts of the organization. My group, Teaching and Learning with Technology (TLT), has many moving parts and a handful of groups within it that all do different, yet similar things. Our Classroom and Lab Computing (CLC) team focuses its time on our physical spaces and the infrastructure that supports them. Education Technology Services (ETS) is a different type of organization in that it not only evaluates, tests, and recommends technology solutions for faculty, staff, and students but also looks to unpack the affordances of those technologies. ITS Training Services spends most of its time designing and delivering training to faculty, staff, and students but also manages big projects that deliver all sorts of other services. WebLion is a team that focuses intensely on the overall processes inherent in the design, development, and deployment of large organizational websites. What amazes me is that each of these groups is part of a larger value chain of sorts that serves our University very well.

They also all produce lots and lots of data. Some of that data comes from the services that we offer, while a whole bunch of it comes from the questions we ask of our audiences. A few examples of service-level data might be the stuff that our Adobe Connect service collects for us, the data we get each time a student prints in one of our labs, or the record created each time an application is launched. With this kind of data we don't know what happens in an Adobe Connect meeting, what an application does once launched, or the contents of the pages printed (we'd need to do a different kind of analysis to get at that), but we do get clues that help us ask better questions that presumably help us make better decisions. From where I sit, the days of "I betcha …" planning are long gone.

To this end we've embarked on trying to make sense of the data we have in new ways: visual ways. A small team in TLT has been using the Roambi platform to do just that, visualizing otherwise flat data tables to help us make better sense of what is being gathered. The unfortunate thing about this blog post is that you can't get a sense of what it is like to not only see your data, but to touch and manipulate it in real time. The screenshot below is a simple representation of the percentage of overall pages printed during the 2011 academic year (Spring, Summer, and Fall) by College of the Liberal Arts faculty, staff, and students using our managed lab environments.

Printing Data

Now, taken by itself, this is an interesting piece of information: it lets you see that we still print quite a bit in total and that the College of the Liberal Arts prints about 15% of the total number of pages here. But if you take that data and mash it up with our ability to look at it from the departmental level, you can begin to make sense of how to address it as a problem to be solved. Printing is expensive, and while we do all sorts of really great things to be eco-friendly, it isn't the best thing for the environment. When you look at the drill-down at the departmental level you can pinpoint specific programs that print much more than others. Once you do that, you can work at a level where some sort of intervention can be applied.
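The college-to-department drill-down is, at its core, a grouping problem. As a minimal sketch (the record layout and numbers here are invented for illustration; the real data comes from our managed-lab print service), the roll-up from raw print records to percentage shares might look like:

```python
from collections import defaultdict

# Hypothetical print-log records; the real logs and field names
# are assumptions made up for this example.
print_log = [
    {"college": "Liberal Arts", "department": "English", "pages": 1200},
    {"college": "Liberal Arts", "department": "History", "pages": 300},
    {"college": "Engineering",  "department": "CompSci", "pages": 4500},
    {"college": "Science",      "department": "Physics", "pages": 2000},
]

def pages_by(key, records):
    """Total pages printed, grouped by the given record field."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec[key]] += rec["pages"]
    return dict(totals)

def share_of_total(totals):
    """Each group's percentage of all pages printed."""
    grand_total = sum(totals.values())
    return {group: 100 * pages / grand_total for group, pages in totals.items()}

# Same records, two levels of drill-down.
college_share = share_of_total(pages_by("college", print_log))
dept_share = share_of_total(pages_by("department", print_log))
```

The point is that the same raw records answer both the college-level question and the department-level one; a tool like Roambi just makes switching between those views touchable instead of requiring a new query each time.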

This is where the organizational integration starts to really become powerful. With the printing data and the departmental data mashed together, I can sit down with the unit-level directors and begin to construct a strategy to change the overall behavior in a positive way. I can now work with instructional designers in other parts of TLT to construct a workshop for faculty on digital assessment strategies, create learning opportunities for students to understand how technology can be used in the writing process to eliminate passing drafts around, and look at new software that enables new workflows between those audiences. Then we can easily measure the pre- and post-intervention states to see if our intervention might be working.

Another example that happened not too long ago … we had a meeting to discuss changes with one of our smaller public labs on campus. Essentially we were asked what impact moving this small lab might have on students. This happens all the time — construction needs to happen for all sorts of reasons, and typically they are good reasons. Well, the meeting started and I was able to quickly show just how intensely popular this out-of-the-way location really is. Needless to say, the person we were meeting with was blown away and offered new space to better meet the needs of students. It is hard to pack this much information into a readable spreadsheet. The visualization below shows all sorts of data represented in an easy-to-read format … I snapped a single day that represents 3,917 unique userids that logged into the machines in this small, out-of-the-way location. By having this kind of data available and readable we can instantly see trends in use and share that in a meaningful way.
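The "unique userids in a day" figure above is worth a word, since raw login logs over-count: one student logging in three times is still one user. A minimal sketch of the de-duplication, assuming made-up userids and timestamps (the real logs come from our lab machines), might be:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (userid, login date) records for illustration.
logins = [
    ("abc123", date(2011, 10, 3)),
    ("abc123", date(2011, 10, 3)),  # same user, same day: counted once
    ("def456", date(2011, 10, 3)),
    ("abc123", date(2011, 10, 4)),
]

def unique_users_per_day(records):
    """Count distinct userids per day, ignoring repeat logins."""
    users = defaultdict(set)
    for userid, day in records:
        users[day].add(userid)  # a set drops duplicate logins for free
    return {day: len(ids) for day, ids in users.items()}
```

Counting distinct users rather than raw sessions is what lets a single day's number — like the 3,917 above — stand in honestly for how many people the space actually serves.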

Lab Data

We are just at the beginning stages of this approach and haven't figured all of the details out. But as we go forward, we know that using our data in this way could truly be part of a transformative way to get at the future states of our services, our spaces, and the ways we plan for them.