Assessment structure in Data Science and Innovation

How do we connect our students’ areas of developing professional expertise with the tasks we set them? That’s the question we’ve grappled with in the Masters in Data Science and Innovation, where our students are often returning to study after gaining professional experience, and at times changing professions. In particular, we’ve aimed to support our diverse intake of students, who come from multiple disciplinary and professional contexts and may not have coding backgrounds. We design the subject to support students’ success in the course, and in the job market, through an effective, staggered assessment structure and design. This is in part a response to the call from some students for a greater focus in the first subject (Data Science for Innovation, which I teach) on ‘technical skills’. That led us to consider how to build these skills alongside professionally relevant writing and engagement with scholarship.

We gratefully acknowledge the support of a 2019 FFYE grant, funded through DVC (E&S).

Assessment structure

Over a number of sessions, the subject has evolved to create an assessment structure that:

  1. Makes clear the purposes of the assessments, and particularly their written focus
  2. Is professionally relevant, using the genres and tools of the practice
  3. Provides formative feedback and opportunities for improvement

Underpinning all of this is an assessment guide that explains these three features for each of the assignments, and makes the links between the weekly content and the assessments very explicit. A version of the assessment guide is publicly available.

Why do we have to write?

In the excellent book Teaching academic writing, the authors suggest an activity to help students explore the purpose of writing (Coffin et al., 2005), asking them to:

  1. Individually consider types of writing they have done in the preceding period
  2. Categorise these types in small groups (perhaps with some “suggested functional classifications such as memory aids, social communication, learning about x, for assessment”)
  3. In a whole class discussion, narrow the focus to academic writing activities and the purposes of these

In my teaching, I mimicked this activity, asking students to consider the kinds of writing that data scientists might engage in, and (1) what makes them more or less ‘data sciencey’, and (2) what makes them more or less scholarly (see Thinking about genre with students). This exercise had two key aims. First, to flag the role of scholarly literature, for both credibility and currency, alongside the use of high-quality grey literature such as industry reports. Second, to situate these discussions within a wider conversation about the ways in which we use data and other forms of evidence, including scholarly literature, as warrants for our claims while building an argument. For example, the first assignment is to write a discussion paper identifying opportunities for innovation in a sector; we flag why evidence is important in this, in order to identify clear needs, avoid opportunity costs, and understand the context.

Professionally situated 

Analysis we conducted in 2016 of models of writing and the UTS model of learning showed the importance of situating writing within the disciplinary context, but also a significant gap in the literature around the multimodal nature of much professional writing. Data scientists don’t only write in Word; they also ‘write’ through blog posts, RMarkdown and other ‘code-based’ reports, Shiny apps and other dashboards, and formal templates that, for example, assess and manage risk.

In addition, student feedback made it apparent that we needed to make very clear the requirement for students to develop their skills not only in conducting analysis, but in communicating its meaning (and its deficits). Data science writing is about communicating about and with data science. By that I mean it is about data science in the sense that students must be able to communicate the methods, their implications, and their connection to meaning, and with data science in the sense that they must be able to apply technical skill to produce analyses and visualisations that provide evidence with a suitable degree of (un)certainty. This approach aligns with the literature on approaches to writing, which distinguishes between “learning to write” (a relatively surface approach to developing skills and knowledge) and “writing to learn” (recognising the significance of writing as a practice and feature of learning more broadly).

When we set out on this work, we had three written assignments, all produced in a word processor, with one described as an essay. This assessment structure has been gradually revised.

From essay to public ‘discussion paper’

First, we re-framed the task previously described as an ‘essay’ as an industry discussion paper, and introduced a new session on writing in the subject to support students in developing their communication skills. Our new assessment structure encourages students to publish these pieces via Medium. This allows them to build their professional portfolio and engages them with the professional practice of public writing.

From Word report and case study to RMarkdown and ethics canvas

In 2016 we had two assignments, both submitted as Word documents: one in the form of a scientific report, and the other a case study analysis. From 2019, these were combined into a single report which addresses the same criteria but is written in the professionally relevant tool RStudio. This provides an onboarding approach for students unfamiliar with coding environments to begin writing in this context, and supports students already familiar with coding to use these tools alongside their writing activity. Some students choose to conduct their analysis in the environment, while for others it offers an opportunity to gain familiarity with the tool and the approach to writing in it. In addition, students now submit an ‘ethics canvas’, using an example resource from professional practice (a modified Open Data Institute canvas), as an appendix, applying it to their own work.
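
To illustrate what this kind of ‘writing with code’ can look like, here is a minimal sketch of an RMarkdown report skeleton. It is purely illustrative rather than the actual assignment template: the file name sector_data.csv, the column names, and the chunk contents are hypothetical; the point is simply that the prose and the analysis it draws on sit side by side in one document.

````markdown
---
title: "Opportunities for innovation in a hypothetical sector"
author: "A. Student"
output: html_document
---

## Context

Ordinary prose sits here: the argument, the claims, and the citations
that warrant them.

## Evidence

```{r trend-plot, message=FALSE, warning=FALSE}
# Hypothetical data file and columns, used only to illustrate the format
library(ggplot2)

sector <- read.csv("sector_data.csv")

ggplot(sector, aes(x = year, y = value)) +
  geom_line() +
  labs(x = "Year", y = "Value",
       title = "Trend referred to in the discussion above")
```

The paragraph after the chunk then interprets the figure, with an
appropriate degree of (un)certainty.
````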

These changes have led to steady improvements in Student Feedback Survey (SFS) results, with students saying:

There is clear explanation of why we are focusing on things such as writing and communication rather than just technical data skills and this is a great approach.

(SFS 4.3 Spr 2018)

Opportunities for improvement and formative feedback

Finally, we know that students learn from exemplars, and from applying assessment criteria to those exemplars (a process called benchmarking, whose impact has been demonstrated by research I’ve led at UTS: Knight, Leigh, Davila, Martin, & Krix, 2019; see more in my post How do we know if feedback is working?). In early work applying this in DSI, students said:

I really appreciate the focus [on]… getting students to assess past students’ work and assess their own assignment submissions so that we have a better understanding of how we are marked against the learning criteria and makes students focus on the learning criteria instead of just doing the assignment or readings.

(SFS 5.15, Aut 2016)

In DSI, we have introduced a range of opportunities for engagement with exemplars, sample structures, and past student work, including:

  1. In their first assignment, a discussion paper, we provide authentic examples (from professional contexts), student examples on Medium, and old graded examples which they are asked to grade themselves (a benchmarking task). Students are then provided with our own assessments of them (alongside all of our old feedback).
  2. In their second assignment, a report: (1) exemplars are made available for review, including old graded examples, (2) students submit an early draft, including the ethics canvas, and their analysis for peer review, (3) students peer review, using a ‘comment bank’ of common issues I have identified over earlier semesters (students who do not do this task have a cap applied to their final mark), and (4) students then submit 2 weeks later, having hopefully engaged with the criteria and common issues from both a peer assessor and assessee perspective.

References

Coffin, C., Curry, M. J., Goodman, S., Hewings, A., Lillis, T., & Swann, J. (2005). Teaching academic writing: A toolkit for higher education. Routledge.
Knight, S., Leigh, A., Davila, Y. C., Martin, L. J., & Krix, D. W. (2019). Calibrating assessment literacy through benchmarking tasks. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2019.1570483
OECD, & Statistics Canada. (2000). Literacy in the Information Age – Final Report of the International Adult Literacy Survey. OECD. Retrieved from http://www.oecd.org/edu/skills-beyond-school/41529765.pdf
National Commission On Writing. (2003). Report of the National Commission on Writing in America’s Schools and Colleges: The Neglected “R,” The Need for a Writing Revolution. College Board. Retrieved from http://www.collegeboard.com/prod_downloads/writingcom/neglectedr.pdf
Barton, D., & Hamilton, M. (1998). Understanding literacy as social practice. Local Literacies: Reading and Writing in One Community, 3–22.
Gee, J. P. (1996). The New Literacy Studies: Sociocultural approaches to language and literacy. Social Linguistics and Social Literacies: Ideology and Discourse, 46–65.
Hyland, K. (2002). Genre: Language, context, and literacy. Annual Review of Applied Linguistics, 22, 113–135. Retrieved from http://journals.cambridge.org/abstract_S0267190502000065
Lea, M. R., & Street, B. V. (2006). The “academic literacies” model: Theory and applications. Theory into Practice, 45(4), 368–377. Retrieved from http://www.tandfonline.com/doi/abs/10.1207/s15430421tip4504_11
Hyland, K. (2013). Writing in the university: Education, knowledge and reputation. Language Teaching, 46(01), 53–70. Retrieved from http://journals.cambridge.org/abstract_S0261444811000036

Feature image by Toby Burrows.
