Joris van Zundert edited this page Dec 15, 2016 · 3 revisions

TODO: We want to infer a 'standard' set of practices when defactoring

  • We chose one execution path and one namespace
  • To do this we broke the code up into pieces that expressed that one execution path
  • We inspected values and reported them explicitly as a way of exposing the 'state' of the execution
  • We DEFACTORED the code in places (refactored it for single-namespace execution)

(NOTE: We treated these in the current draft introduction, but it would be good to recap them as a more formal, theory-framed recommendation for practice.)
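The practices above could be sketched in miniature like this (a hypothetical illustration of what defactoring does to code, not Underwood & Sellers's actual code):

```python
# Refactored original: state is hidden inside a function's local namespace.
def normalize(counts):
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Defactored: the same computation inlined into one execution path and
# one namespace, with intermediate values inspected and reported explicitly.
counts = {"whale": 3, "sea": 1}  # example input (hypothetical data)
total = sum(counts.values())
print("total:", total)           # report state of the execution
freqs = {word: n / total for word, n in counts.items()}
print("freqs:", freqs)           # report state of the execution
```

The point of the sketch is the trade: the function version is better engineering, but the defactored version makes every intermediate value available for critical inspection in notebook order.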

TODO: Reflexive analysis

  • Do more reflexive (rather than merely descriptive) analysis of the code in this notebook.
  • We should keep in mind that this analysis goes in three directions:
    1. DEFACTORING as a method of code criticism (connects to 1, but situated in a broader intellectual context) [WE SHOULD PRE-PUBLISH AT THIS POINT]
    2. textual scholarship and critical editing, code as text
    3. STS: what do we see in Underwood & Sellers's work with respect to methodological changes and modes of knowledge production (computational narratives)? Especially the selection of features (most_common vs. binomial_select) as a reflection of Underwood & Sellers's understanding of their audience

Fragments by Underwood/Sellers that could serve as focal points for reflexive analysis of the code.

TODO: house cleaning

  • Clean out the GitHub repository (removing extraneous code) and write a new README for this specific project
    • circulate the repository and notebook, soliciting feedback and suggestions for publication venues
    • generate a PDF version of the notebook

Notes from 2016_06_14 Skype

Some things that didn't fit too well into the Timeline below

  • We want to exercise full Open Notebook Science, which means including notes on conversations/development here so that they are retrievable from commits.
  • To stress the Open Science approach we want to tag certain stages of the notebook so people can see how we iterated toward our method for defactoring. (Version 0.1 probably on 15 June 2016.)
  • Diagram of the execution path
  • define terminology (step over, step into, etc)
  • double ## for our defactoring comments?
  • move defactoring code into separate cells
  • Who do we want to invite to review?
  • create a tag/version release of the current state of the notebook
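The double-`##` idea from the list above could look like this (a sketch of the proposed convention only, not settled practice):

```python
# Original author's comment, preserved as-is with a single hash.
model_threshold = 0.5

## DEFACTORING: double-hash comments would carry our critical commentary,
## keeping it visually distinct from the authors' own comments when
## reading the notebook cells.
predictions = [0.2, 0.7, 0.9]
labels = [p > model_threshold for p in predictions]
print(labels)
```

Since `##` is still an ordinary Python comment, the convention costs nothing at execution time and survives copy-paste between cells.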

'TIMELINE'

  • July/August '16

    • Notebook should be finished to a state where we are comfortable about
      • soliciting feedback
      • defactoring as a method being clearly defined
  • August/Sept.

    • Notebook published with Digits (an open science publishing platform initiated by Pitt, Purdue, Penn(?))
    • Start working on the article
  • Sept./Dec.

    • PR for Digits
    • PR and market the notebook
    • Select 'peers' on both sides of the pond to generate feedback
    • Have some instructions for these reviewers(?)
    • Related: we exercise open science: where does the feedback go? Hypothes.is might not be everybody's tool of choice?
  • Oct./Jan.

    • Get feedback
    • Present the notebook at suitable venues

Next meeting/homework

  • 19 July, 3pm CEST/9am EDT
  • Homework
    • Matt: notebook in a pre-final version
    • Joris: revisit the article scope/outline on Google Drive: is it still 'in sync', and what needs to change? Start writing

Future Directions

  • creating PROV annotations for the notebook so we can model the computational process