Difference between revisions of "Tools: Kaggle test data set"
'''This page is for the creation and organisation of a 240 image test data set for the [https://www.linkedin.com/pulse/proposed-signs-literacy-kaggle-research-competition-2018-greenstreet/ Signs of Literacy Kaggle research competition]. The competition will run from November 2018 to early January 2019.'''
__TOC__
==Wikitable display of KaggleTestData as of Saturday, June 16th, 2018 @ 21.12 (n=33)==

{{#ask:[[Category:KaggleTestSnippets]]
|?Occupation
|?Has signofftype
|?Has marketype
|?Has initialnumber
|?Has grade
|?Res country
|format=broadtable
|link=all
|headers=show
|searchlabel=... further results
|class=sortable wikitable smwtable
}}
----
[[File:Signs Of Literacy Kaggle Competition 14062018.PNG|600px|thumb|right|[https://www.linkedin.com/pulse/sponsor-groundbreaking-kaggle-research-competition-colin-greenstreet/ Signs of Literacy Kaggle research competition]]]
[[File:Test Snippet Array 16062018.PNG|600px|thumb|right|KaggleTestSnippet images are stored in MediaWiki]]
==Test data set==
Revision as of 20:18, June 16, 2018
We will have 120 snippets and metadata from our English High Court of Admiralty data up on the MarineLives wiki by the end of this week (Friday, June 15th, 2018). We will add a further 120 snippets and metadata from the Alle Amsterdamser Akten (Dutch notarial archives) over the weekend.
In the short term, we need to submit a 240-image graded snippet test data set to Kaggle by next Wednesday. Our medium-term solution, with the help of Picturae, will be to have 10,000 images up on a Picturae-controlled IIIF server, with the snippets created in Recogito referring back to the IIIF server images.
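The snippet-to-IIIF linkage described above could look like the following sketch. The hostname and image identifier below are invented for illustration, but the URL template follows the IIIF Image API pattern ({identifier}/{region}/{size}/{rotation}/{quality}.{format}) that a Picturae-hosted IIIF server would expose:

```python
# Sketch: mapping a snippet's bounding box on a source image to an
# IIIF Image API URL. Server hostname and image identifier are
# hypothetical; the URL path segments follow the IIIF Image API syntax:
# {identifier}/{region}/{size}/{rotation}/{quality}.{format}
def iiif_snippet_url(server, identifier, x, y, w, h, width=600):
    """Return an IIIF Image API URL cropping region (x, y, w, h) from
    the source image, scaled to `width` pixels wide."""
    return (f"https://{server}/iiif/{identifier}/"
            f"{x},{y},{w},{h}/{width},/0/default.jpg")

# Example: a signature snippet cropped from a (hypothetical) deposition page.
print(iiif_snippet_url("images.example.org", "HCA-13-71_f123r",
                       850, 1420, 400, 160))
```

A viewer such as Mirador, or the Recogito annotations themselves, could then reference each snippet purely by such a URL, with no separately stored image files.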
We are creating a simple semantic form, which will display each snippet and its classification as a marke, initial, or signature, and which will allow input of the metadata: name, occupation, age, place of residence, and date of the source deposition or source notarial document.
Our semantic wiki will then allow all these snippets to be sorted by any aspect of the metadata, by their classification as marke, initial, or signature, and by the grading for sophistication of execution we choose to give them. We will create two sets of input metadata fields for four graders - Colin Greenstreet, Dr Mark Hailwood, Mark Ponte and Dr Jelle van Lottum. The first set of input fields will be a simple "simple/medium/sophisticated" tag; the second will be a forced ranking from 1 to 40, with 1 as most sophisticated and 40 as least sophisticated.
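The metadata-driven sorting described above can be sketched in Python. The record layout and example values below are invented for illustration; the field names loosely mirror the semantic properties (Occupation, signofftype, grade) queried elsewhere on this page:

```python
# Hypothetical snippet records, mirroring the semantic properties
# (?Occupation, ?Has signofftype, ?Has grade, ...) used on this wiki.
# Names and values are invented example data.
snippets = [
    {"name": "John Frith", "occupation": "mariner",
     "signofftype": "marke", "grade": "simple", "rank": 37},
    {"name": "Pieter de Vries", "occupation": "notary",
     "signofftype": "signature", "grade": "sophisticated", "rank": 2},
    {"name": "Thomas Hale", "occupation": "shipwright",
     "signofftype": "initial", "grade": "medium", "rank": 18},
]

# Sort by any metadata field, e.g. by class then by forced ranking.
by_class_and_rank = sorted(snippets,
                           key=lambda s: (s["signofftype"], s["rank"]))
for s in by_class_and_rank:
    print(s["signofftype"], s["rank"], s["name"])
```

In the wiki itself this sorting is done by Semantic MediaWiki's sortable broadtable rather than by code; the sketch just shows that any combination of metadata fields can serve as the sort key.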
==Grading criteria==
Once we have the first 120 snippets up on the MarineLives wiki, we will grade the three classes of snippet (markes, initials, and signatures) by "sophistication of execution". Rather than attempting to pre-discuss what this means between the graders, we will each independently think about what grading criteria would look like for markes, initials, and signatures, and then grade the 120 snippets within the three classes (not attempting to compare markes, initials, and signatures as classes in terms of sophistication, just doing the grading within each class).
We plan to grade in two ways:
*Firstly, using our own criteria for sophistication of execution, we will assign a "simple", "medium", or "sophisticated" tag to the markes, initials, and signatures within their class.
*Secondly, again using our own criteria for sophistication of execution, we will rank the snippets within their class by sophistication of execution, with 1 for the most sophisticated and 40 for the least sophisticated. We will NOT allow ties, so each snippet will have a different ranking number.
We plan early next week to hold a discussion amongst the graders about the definitions we have used and the criteria we have developed and applied, and to compare how consistent (or not) we as C21st humans were in grading C17th markes, initials, and signatures.
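Because the forced ranking disallows ties, each grader's ranking of a 40-snippet class is a permutation of 1..40, and grader consistency can be quantified with Spearman's rank correlation using the tie-free shortcut formula. A minimal sketch, with invented grader data on a toy four-snippet class:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for two tie-free forced rankings of
    the same snippets (each a dict: snippet id -> rank 1..n)."""
    assert rank_a.keys() == rank_b.keys()
    n = len(rank_a)
    # With ties disallowed, each ranking must be a permutation of 1..n.
    assert sorted(rank_a.values()) == list(range(1, n + 1))
    assert sorted(rank_b.values()) == list(range(1, n + 1))
    # Tie-free shortcut: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    d2 = sum((rank_a[s] - rank_b[s]) ** 2 for s in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy example: two graders who disagree only on one adjacent pair.
grader1 = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}
grader2 = {"s1": 2, "s2": 1, "s3": 3, "s4": 4}
print(spearman_rho(grader1, grader2))  # 0.8
```

A rho near 1 would suggest the independently developed criteria converge; values near 0 would suggest "sophistication of execution" is being read quite differently by different graders.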
==Grading process==
It will be interesting to see what process the graders develop to do the grading, and not just the grading criteria and results. Comparing 40 markes, initials, or signatures is probably just manageable: with only 40 snippets to grade, we could if necessary paste them all onto a PowerPoint page and shuffle them around until they are in a grading order that satisfies us. That approach will clearly not work for 10,000 images.
==Conjoint analysis==
We are still working on the idea of using conjoint analysis to present graders with random binary comparisons of markes, initials and signatures, and to allow input of a "more sophisticated/less sophisticated" binary choice. This method would enable us to cope with the forced ranking of 10,000 snippets, and would also lend itself to working with significant numbers of volunteers on a semi-automated basis to accumulate grading data.
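One standard way to turn accumulated binary "more sophisticated / less sophisticated" judgments into a complete forced ranking is the Bradley-Terry model, fitted below with a simple minorisation-maximisation update. This is a sketch of the general technique, not the competition's actual tooling; the items and comparison outcomes are invented:

```python
from collections import defaultdict

def bradley_terry(comparisons, items, iters=100):
    """Fit Bradley-Terry strengths from (winner, loser) pairs and
    return a tie-free ranking (1 = most sophisticated)."""
    wins = defaultdict(int)   # total wins per item
    pairs = defaultdict(int)  # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
    strength = {i: 1.0 for i in items}
    for _ in range(iters):
        # MM update: s_i = W_i / sum_j n_ij / (s_i + s_j)
        new = {}
        for i in items:
            denom = sum(pairs[frozenset((i, j))] / (strength[i] + strength[j])
                        for j in items if j != i)
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {i: v / total for i, v in new.items()}
    ranked = sorted(items, key=lambda i: -strength[i])
    return {item: rank for rank, item in enumerate(ranked, start=1)}

# Toy example: 5 volunteer judgments over 3 snippets.
comparisons = [("a", "b"), ("a", "b"), ("b", "c"), ("b", "c"), ("a", "c")]
ranking = bradley_terry(comparisons, ["a", "b", "c"])
print(ranking)  # {'a': 1, 'b': 2, 'c': 3}
```

Because each volunteer judgment is just a (winner, loser) pair, judgments accumulate naturally across many volunteers, and the fitted strengths induce a complete tie-free ranking without any single person ever having to order all 10,000 snippets.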
Ideally, we would get a software developer interested in this. A solution using the Mirador IIIF viewer would be ideal, since it would force users into a close reading of the images, and would benefit from the fact that Picturae will be putting all 10,000 source images for our Kaggle training data set onto an IIIF server.
We are also checking whether there is some off-the-shelf conjoint analysis software we could use.