Difference between revisions of "JRA/JRAmeetings/2017"

From Synthesys3
Revision as of 17:51, 7 March 2017

JRA Meeting

13–15 March 2017, Royal Botanic Garden Edinburgh


Agenda

Objective 3

Lead: Margaret Gold / Laurence Livermore

Schedule:

  • Summary of SYNTHESYS crowdsourcing work to date – 20 mins (LL?)
  • Current/ongoing crowdsourcing activities amongst partners – 60 mins (MG)
      • key findings / statistics
      • live demonstrations
      • lessons learned

Discussion:

  • Future of crowdsourcing for natural history collections / sustaining crowdsourcing beyond SYNTHESYS – Time TBC (MG)

Discussion questions:

  1. Can crowdsourcing scale to meet the demands of high-throughput digitisation (e.g. thousands of specimens each day)?
  2. Is label transcription via crowdsourcing cost-effective? Should we consider paid outsourced transcription?
  3. Is transcription a good way of engaging a diverse online audience with our specimens?
  4. Is it feasible to develop hybrid systems that combine OCR and use crowdsourcing only for tricky labels?
  5. To what extent do partner institutions value the public participation / engagement component of crowdsourcing?

Requests for participants:

  • What crowdsourcing projects are you currently running? Have any recent crowdsourcing projects been completed?
  • Invite all participants to talk about their institutes’ experience of crowdsourcing, with statistics, for the second part of the schedule.
  • What tracking methods did you implement, if any, and have you kept a cost profile?
  • Are there others within your institution that are interested / engaged in this topic?
  • Invitation to join the Crowdsourcing SIG discussion group https://groups.google.com/forum/#!forum/cit-sci-transcription (wider than just SYNTHESYS)


Objective 4

Lead: Laurence Livermore / Elspeth Haston

Schedule:

  • Overview of Digitisation on Demand deliverable – 20 mins (LL)
  • Current/ongoing digitisation activities amongst partners (round table summary by each institute)
      • established or tested workflows, statistics and costs per specimen
      • statistics of Access users with significant digitisation components to visit (may be hard to get statistics?)
      • digital loan provision – processes and stats
      • planned/future workflows (e.g. for NHM it would be Alice)
  • Collections audit activities (with a focus on CSAT use and planned future use – NHM could talk about Join the Dots here)
  • Provision of digitised data, e.g. Data Portals and online collection databases (current provision and future provision?)

Discussion:

  1. Which of your collections are suitable for digitisation on demand requests?
  2. Does your institution have workflows in place to handle these requests?
  3. How do you make your digitised collections available (for example, do you have an institutional data portal)?
  4. What are your institutes’ plans for future collection audit and assessment activities? Are you personally involved in these, or are others responsible?
  5. Do you use CSAT? What are the deficiencies of CSAT, and how can we make CSAT collection categories more equivalent across institutions?
  6. Does your institute have any plans for sharing and display of 3D data (e.g. CT scans) online?

Requests for participants:

  • Please bring: “digital loan” request data, information on established digitisation workflows, and collections audit data.