Tools: Kaggle test data set
Revision as of 07:09, June 15, 2018
This page is for the creation and organisation of a 240-image test data set for the Signs of Literacy Kaggle research competition. The competition will run from November 2018 to early January 2019.
Access test data set
KaggleTestSnippets: HCA 13/53 f.87r Marke, Anchor
KaggleTestSnippets: HCA 13/53 f.163v Initial
KaggleTestSnippets: HCA 13/53 f.166r Signature
KaggleTestSnippets: HCA 13/68 f.17r Initial; Marke, Curved form
KaggleTestSnippets: HCA 13/68 f.20r Marke, Squiggle
KaggleTestSnippets: HCA 13/68 f.25r Initial
KaggleTestSnippets: HCA 13/68 f.81v Marke, Anchor
KaggleTestSnippets: HCA 13/70 f.314v Marke, Cross-hatch; Initial
KaggleTestSnippets: HCA 13/70 f.316r Marke, Circle
KaggleTestSnippets: HCA 13/70 f.316v Signature
KaggleTestSnippets: HCA 13/70 f.671v Initial
KaggleTestSnippets: HCA 13/71 f.448v Initial
KaggleTestSnippets: HCA 13/71 f.449r Marke, Star
KaggleTestSnippets: HCA 13/71 f.452r Marke, Cross
KaggleTestSnippets: HCA 13/71 f.452v Signature
KaggleTestSnippets: HCA 13/71 f.455r Initial
KaggleTestSnippets: HCA 13/71 f.497v Marke, Cross
KaggleTestSnippets: HCA 13/72 f.32v Initial
KaggleTestSnippets: HCA 13/72 f.34v Initial; Marke, Squiggle
KaggleTestSnippets: HCA 13/73 f.36r Marke, Anchor
KaggleTestSnippets: HCA 13/73 f.486v Marke, Anchor
KaggleTestSnippets: HCA 13/73 f.772r Initial
Test data set
We will have 120 snippets and metadata from our English High Court of Admiralty data up on the MarineLives wiki by the end of this week (Friday, June 15th, 2018). We will add a further 120 snippets and metadata from the Alle Amsterdamser Akten (Dutch notarial archives) over the weekend.
In the short term, we need to submit a graded test data set of 240 snippets to Kaggle by next Wednesday. Our medium-term solution, with the help of Picturae, will be to have 10,000 images up on a Picturae-controlled IIIF server, with the snippets created in Recogito referring back to the IIIF server images.
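Since snippets will refer back to images on the IIIF server, each snippet can be addressed simply as a pixel region of its source image. The sketch below assumes the Picturae server follows the standard IIIF Image API URL scheme ({base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}); the base URL and identifier in the example are hypothetical, not the real server addresses.

```python
# Sketch: building an IIIF Image API URL for a snippet region.
# Assumes a IIIF Image API 2.x compliant server; base URL and image
# identifier below are illustrative placeholders only.

def snippet_iiif_url(base, identifier, x, y, w, h):
    """Return a full-size JPEG URL for the (x, y, w, h) pixel region
    of the source image held on the IIIF server."""
    region = f"{x},{y},{w},{h}"
    return f"{base}/{identifier}/{region}/full/0/default.jpg"

# Hypothetical snippet from HCA 13/70 f.316v:
url = snippet_iiif_url("https://iiif.example.org/iiif/2",
                       "HCA_13-70_f316v", 120, 840, 400, 160)
```

A snippet record on the wiki would then only need to store the image identifier and region coordinates, rather than a copy of the image itself.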
We are creating a simple semantic form, which will display each snippet together with its classification as a marke, initial or signature, and which will allow the input of the metadata of name, occupation, age, place of residence and date of the source deposition or of the source notarial document.
Our semantic wiki will then allow all these snippets to be sorted by any aspect of the metadata, by their classification as marke, initial or signature, and by the grading for sophistication of execution we choose to give them. We will create two sets of input metadata fields for four people - Colin Greenstreet, Dr Mark Hailwood, Mark Ponte and Dr Jelle van Lottum. The first set of input fields will be a simple "simple", "medium", "sophisticated" tag; the second set will be a forced ranking of 1 to 40, with 1 as most sophisticated and 40 as least sophisticated.
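The per-snippet record and sorting behaviour described above can be sketched as a simple data structure. The field names are assumptions based on the metadata listed in the text, not a confirmed schema, and the two sample records are hypothetical illustrations, not real deposition data.

```python
# Sketch of a per-snippet metadata record and sorting by any field.
# Field names are assumed from the text; the sample values are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class Snippet:
    page: str            # e.g. "HCA 13/70 f.316v"
    classification: str  # "marke", "initial" or "signature"
    name: str
    occupation: str
    age: int
    residence: str
    date: str            # date of the source deposition or notarial document

snippets = [
    Snippet("HCA 13/70 f.316v", "signature", "John Smith", "mariner",
            34, "Wapping", "1656-03-12"),
    Snippet("HCA 13/53 f.87r", "marke", "Edward Jones", "cooper",
            51, "Rotherhithe", "1655-11-02"),
]

# Sort by any aspect of the metadata, e.g. by age:
by_age = sorted(snippets, key=lambda s: s.age)
```

The same `sorted(..., key=...)` pattern covers sorting by classification, occupation, residence or date.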
Grading criteria
Once we have got the first 120 snippets up on the MarineLives wiki, we will grade the three classes of snippet (markes, initials and signatures) by "sophistication of execution". Rather than attempting to discuss in advance what this means between the graders, we will each independently think about what grading criteria would look like for markes, initials and signatures, and then grade the 120 snippets within the three classes (not attempting to compare markes, initials and signatures as classes in terms of sophistication, just doing the grading within the classes).
We plan to grade in two ways:
Firstly, using our own criteria for sophistication of execution, we will assign a "simple", "medium" or "sophisticated" tag within their class to the markes, initials and signatures.
Secondly, again using our own criteria for sophistication of execution, we will rank the snippets within their class by sophistication of execution, with 1 for the most sophisticated and 40 for the least sophisticated. We will NOT allow ties, so each snippet will have a different ranking number.
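The no-ties rule above means each grader's forced ranking of a 40-snippet class must be a permutation of 1 to 40: no repeated ranks and no gaps. A minimal validation check, useful when collecting rankings from several graders, could look like this (the function name is our own, not part of any existing tool):

```python
# Sketch: validating a grader's forced ranking under the no-ties rule.
# Each of the n snippets in a class must carry a distinct rank from
# 1 (most sophisticated) to n (least sophisticated).

def is_valid_forced_ranking(ranks, n=40):
    """True iff ranks is a permutation of 1..n (no ties, no gaps)."""
    return sorted(ranks) == list(range(1, n + 1))
```

For example, a list containing rank 1 twice would be rejected, since two snippets cannot share a rank.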
Early next week we plan to have a discussion amongst the graders about the definitions we have used and the criteria we have developed and applied, and to compare how consistent (or not) we as C21st humans were in grading C17th markes, initials and signatures.
Grading process
It will be interesting to see what process the graders develop to do the grading, and not just the grading criteria and results. Comparing 40 markes, initials or signatures is probably just manageable: with only 40 snippets to grade we could, if necessary, paste them all onto a PowerPoint page and shuffle them round until we have them in a grading order that satisfies us, but that will clearly not work for 10,000 images.
Conjoint analysis
We are still working on the idea of using conjoint analysis to present graders with random binary comparisons of markes, initials and signatures, and to allow input of a "more sophisticated/less sophisticated" binary choice. This method would enable us to cope with the forced ranking of 10,000 snippets, and would also lend itself to working with significant numbers of volunteers on a semi-automated basis to accumulate grading data.
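The aggregation step of this idea can be sketched in a few lines: present random pairs, record which snippet the grader judged more sophisticated, then rank snippets by the fraction of comparisons they won. In the sketch below the human grader is replaced by a hypothetical latent "sophistication" score so that the code is self-contained; a production system would use real grader clicks and a proper comparison model such as Bradley-Terry rather than raw win fractions.

```python
# Minimal sketch of ranking from random binary comparisons.
# The latent scores stand in for human "more sophisticated" judgements
# and are invented for illustration only.
import random
from collections import Counter

rng = random.Random(0)
latent = {f"snippet_{i}": i for i in range(10)}  # hypothetical scores
ids = list(latent)

wins, seen = Counter(), Counter()
for _ in range(2000):
    a, b = rng.sample(ids, 2)                    # a random binary comparison
    winner = a if latent[a] > latent[b] else b   # simulated grader choice
    wins[winner] += 1
    seen[a] += 1
    seen[b] += 1

# Rank by fraction of comparisons won, most sophisticated first.
ranking = sorted(ids, key=lambda s: wins[s] / max(seen[s], 1), reverse=True)
```

With enough comparisons per snippet the win fractions recover the underlying order, which is what would let many volunteers contribute grading data a few clicks at a time.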
Ideally, we would get a software developer interested in this. A solution using the Mirador IIIF viewer would be ideal, since it would force users into a close reading of the images, and it would benefit from the fact that Picturae will be putting all 10,000 source images for our Kaggle training data set onto an IIIF server.
We are also checking whether there is some off-the-shelf conjoint analysis software we could use.