Active Lesson Requests
The Editorial Board of the English-language version of the Programming Historian would particularly welcome hearing from prospective authors on the following lessons. Anyone interested should consult the Author’s Guidelines for more information:
What can you conclude from topic models?
We requested this back in early 2017, but no one has yet written it, so we’re putting out the plea again: we’ve got a great lesson on how to conduct a topic model using MALLET. It’s been extraordinarily popular over the years. But we’re still not seeing enough historians (and humanists) actually publishing topic-model-based research results. If you’ve done so, please write us a tutorial on how others can do so too. This is a great opportunity to share the HOW of your article (all the bits the peer reviewers told you to take out so you could focus on the WHAT).
How do you conduct spatial clustering of geographic data?
Another re-request from our wish-list in 2017. We’ve got a great set of introductory mapping lessons, and while they are great for teaching how to make nice visualisations, we’ve not yet branched deeply enough into more advanced analysis skills. One of the most useful is the application of clustering algorithms, which identify logical groups of individual points in geographic space. These are useful for drawing conclusions about anything from trade to migration. But, as with all analyses, it’s a space (no pun intended) fraught with pitfalls for the uninitiated. We’re looking for a great introduction that highlights both the strengths and the challenges of this form of analysis.
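To give a flavour of the idea, here is a minimal sketch of the simplest possible approach: a naive single-link clustering that groups points lying within a distance threshold of one another. The coordinates and place groupings are hypothetical, and the example deliberately illustrates one of the pitfalls mentioned above: measuring distance in raw degrees of latitude and longitude, which a real lesson would need to address with proper projections or great-circle distances.

```python
import math

# Hypothetical (latitude, longitude) points: one tight group of
# locations near London, another near Paris.
points = [(51.50, -0.12), (51.51, -0.10), (51.52, -0.11),
          (48.85, 2.35), (48.86, 2.34)]

def dist(a, b):
    # Pitfall: straight-line distance in degrees, NOT metres --
    # fine for a toy example, misleading for real analysis.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster(points, threshold=0.5):
    """Naive single-link clustering: a point joins any cluster
    that already contains a point within `threshold` of it;
    clusters it bridges are merged."""
    clusters = []
    for p in points:
        touching = [c for c in clusters
                    if any(dist(p, q) <= threshold for q in c)]
        for c in touching:
            clusters.remove(c)
        clusters.append(sum(touching, []) + [p])
    return clusters

groups = cluster(points)
print(len(groups))  # 2 -- the London points and the Paris points
```

A real lesson would likely reach for an established algorithm such as DBSCAN with a haversine distance metric, but even this toy version shows the analytical move: from a scatter of points to named groups you can reason about.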
When do you know your network analysis is meaningful?
Again, one that we asked for previously, but haven’t yet seen. Ok, so we’ve built a great network diagram. How do we move to the next step and form meaningful conclusions? This is about starting with a graph and shifting into analysis mode. If you can help our readers take that step, we want to hear from you.
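As a tiny illustration of that first step from diagram to analysis, here is a sketch that computes degree centrality (how many connections each node has) from an edge list. The correspondence network shown is entirely hypothetical; a published lesson would go much further, but even this simple count turns a picture into a ranked, interpretable result.

```python
from collections import Counter

# A hypothetical correspondence network: each edge is (writer, recipient).
edges = [("Adams", "Jefferson"), ("Adams", "Franklin"),
         ("Jefferson", "Franklin"), ("Franklin", "Paine")]

# Degree centrality: count every edge touching each person.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Franklin comes out most central in this toy network.
for name, d in degree.most_common():
    print(name, d)
```

The analytical question, of course, is what a high degree actually *means* for your sources, and that interpretive step is exactly what we want a lesson to teach.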
Applying TF-IDF to Historical Research
And the last of our unfulfilled requests: let’s talk about meaningful words. Term Frequency-Inverse Document Frequency is a well-known means of identifying words that appear more often than we might expect in a given document. It’s one of the ways we know what a document is about. Let’s take this to the next step and teach readers how this fairly simple statistic about meaningful words can turn into meaningful research outputs. If you’ve published on historical linguistics (as in #1 above), we’d love to hear from you on the HOW TO of your wonderful paper.
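For readers new to the statistic, it really is as simple as the request suggests. Here is a minimal sketch on a toy corpus (the three "documents" are invented): a word's score rises with how often it appears in a document and falls with how many documents it appears in, so ubiquitous words like "the" score zero while distinctive words stand out.

```python
import math

# A toy corpus of three short hypothetical "documents".
docs = [
    "the mill workers went on strike",
    "the harvest failed and the workers left",
    "the strike ended after the harvest",
]
tokenised = [d.split() for d in docs]

def tf_idf(term, doc, corpus):
    """Term frequency in `doc` times inverse document frequency in `corpus`."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)          # documents containing term
    idf = math.log(len(corpus) / df)                  # 0 when term is everywhere
    return tf * idf

# 'the' appears in all three documents, so its score is zero;
# 'mill' is unique to the first document, so it scores highest there.
print(tf_idf("the", tokenised[0], tokenised))
print(tf_idf("mill", tokenised[0], tokenised))
```

Getting from scores like these to a historical argument is precisely the gap we are asking an author to fill.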
Space Syntax of Historical Data
Having seen a great workshop by Katrina Navickas on Space Syntax, we’d like to learn more about it and about how historians can apply this geographical approach in their research.
How to Publish Digital Scholarly Editions using a Native XML Database
Digital scholarly editions are often modelled as XML documents. Although some publication tools are available, the options can be bewildering and the solutions may not fit a user’s particular needs. In keeping with our open ethos, we’d like to see someone take readers through an open-source solution that gives them a sustainable and flexible way to publish their digital edition.
How to Analyse Audio Artefacts
We have one lesson on how to use Audacity to edit audio files and another on how to transform your data into audio to better understand it. But you can do much more! How are you using tools to get quantifiable data about your audio artefacts? Or, how can you use machine learning techniques to produce new understandings of an audio collection?