OpenCon (2016)

Another conference, this time a Cambridge satellite meeting of OpenCon, whose mission I quote here: “OpenCon is a platform for the next generation to learn about Open Access, Open Education, and Open Data, develop critical skills, and catalyze action toward a more open system of research and education”, targeted at students and early-career academic professionals. But they do allow a few “late career” professionals to attend as well!

I could only attend the morning session, for which the keynote speaker was Erin McKiernan. The presentation, entitled How open science helps researchers succeed, was presented as an exploration of an article of the same name written by Erin and colleagues and published in eLife.[1] Erin has created a support page at http://whyopenresearch.org to augment the presentation, and it is well worth a visit.

One striking point made was the assertion that Open publications get more citations! 
[Figure: Open publications get more citations]

As with many metrics of the impact of science publication processes, a citation itself lacks the context of why it was made (see this post for further discussion), but the expectation is that a citation is “good”. From my perspective as a chemist, I did wonder why molecular science was missing from the graphic above. Do open chemistry publications also get more citations?

Which brings me to another point made during the talk: the increasingly controversial role of (journal) impact factors, the pressure placed on early-career researchers to publish only in journals with “high” impact factors, and the tendency for their careers to be assessed, at least in part, on these and on the anticipated “h-index”. The audience was indeed encouraged to go visit http://www.ascb.org/Dora/ (the Declaration on Research Assessment, or Putting science into the assessment of research). Have you signed it yet?
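
As an aside for anyone unfamiliar with it, the h-index mentioned above is simple arithmetic: it is the largest h such that an author has h publications each cited at least h times. A minimal sketch follows; the citation counts are invented purely for illustration.

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

# Invented citation counts, purely illustrative.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with >= 3 citations each)
```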

Another manifestation of the modern trend to analyse impact metrics is the site Impactstory.org. This is a scripted resource that starts from your ORCID identifier and (optionally) your Twitter account (yes, apparently Tweets matter!) to derive a more complex alternative metric of an individual’s impact. I had not tried this one before, and so I submitted my ORCID and my Twitter account and watched as the system went off to http://orcid.scopusfeedback.com (Scopus is an Elsevier product) to attempt to create my profile. It ground away for quite a while, reporting initially that I had no publications! This was followed by an unexpected error; I did not get my impact back!

But this experiment served to highlight one aspect that was discussed at the meeting: data and other research objects. The graphic above refers only to the citation of journal articles; it does not yet include the citation of data. However, ORCID DOES include data and research objects as works. And because the granularity of my data and research objects is very fine (one molecule = one work), I have quite a few, in fact ~200,000! ORCID gets to about 8000 before it gives up. I suspect http://orcid.scopusfeedback.com queries ORCID, gets back ~8000 entries and crashes. No doubt the programmer tasked with implementing this resource did not anticipate that any individual could accumulate 8000+ entries, or factor in that the vast majority of these would of course not be journal articles but data. If the site gets back to me about the crash I experienced, I will update here.
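
For the curious, the scale of the problem is easy to check directly: ORCID’s public API will return the works summary for any ORCID iD, and simply counting the entries shows why a tool tuned for a few hundred journal articles might balk at a record dominated by data depositions. Below is a minimal sketch, assuming the v3.0 public endpoint and its JSON layout; the iD shown is a placeholder rather than a real record.

```python
# Minimal sketch: count the works attached to an ORCID record via the
# public API (assumes the v3.0 endpoint and JSON layout; the iD below
# is a placeholder -- substitute a real one).
import requests

ORCID_ID = "0000-0000-0000-0000"  # placeholder ORCID iD
URL = f"https://pub.orcid.org/v3.0/{ORCID_ID}/works"

response = requests.get(URL, headers={"Accept": "application/json"}, timeout=60)
response.raise_for_status()

# Each "group" bundles one or more work summaries that ORCID treats as
# the same work (e.g. the same DOI claimed from different sources).
groups = response.json().get("group", [])
print(f"{ORCID_ID}: {len(groups)} distinct works on the public record")

# A record built from fine-grained data depositions (one molecule = one
# work) can hold tens of thousands of such groups -- far more than a
# consumer expecting only journal articles may be prepared to handle.
```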

Simon Deakin was the next speaker, with (open) data as the focus, and in particular the worry many researchers have of being scooped by others who re-use their open data without proper attribution. The discussion teased out that if data are properly deposited, they will indeed have full associated metadata, and in particular a date stamp that could help protect an author’s interests.

It was really good to meet so many early-career researchers who espouse the open ethos. Perhaps, in 20 years’ time, another graphic akin to the one above might demonstrate that open researchers get more promotions!

References

  1. E.C. McKiernan, P.E. Bourne, C.T. Brown, S. Buck, A. Kenall, J. Lin, D. McDougall, B.A. Nosek, K. Ram, C.K. Soderberg, J.R. Spies, K. Thaney, A. Updegrove, K.H. Woo, and T. Yarkoni, "How open science helps researchers succeed", eLife, vol. 5, 2016. http://dx.doi.org/10.7554/elife.16800


3 Responses to “OpenCon (2016)”

  1. Henry Rzepa says:

    ImpactStory did get in touch: “Yes, you are right, we aren’t currently equipped to handle this many publications, and probably won’t be anytime soon. Sorry about that!”.

  2. Anuj Agarwal says:

    Hi Henry,

    My name is Anuj Agarwal. I’m Founder of Feedspot.

I would like to personally congratulate you, as your blog Henry Rzepa has been selected by our panelists as one of the Top 50 Chemistry Blogs on the web.

    http://blog.feedspot.com/chemistry_websites/

    I personally give you a high-five and want to thank you for your contribution to this world. This is the most comprehensive list of Top 50 Chemistry Blogs on the internet and I’m honored to have you as part of this!

    Also, you have the honor of displaying the badge on your blog.

    Best,
    Anuj

  3. Henry Rzepa says:

With persistent identifiers being mooted for far more aspects of the research process and its outputs (uninterpreted data, instruments, individuals, conference slides, etc.), it is useful to look very carefully into what aspects of this increased exposure the conventional journals might accept.

Thus http://www.rsc.org/journals-books-databases/journal-authors-reviewers/processes-policies/#prior-publication has some interesting guidelines. I pick some examples of when “authors publishing in our journals may present their research ahead of publication”:

1. In blogs, wikis, tweets, and other informal communication channels. So presentation in the “informal” medium you are reading now clearly does NOT preclude subsequent publication in a journal. The observant reader may indeed identify perhaps 10 examples of posts here which were subsequently peer-reviewed and published in conventional journals. I may indeed find the time to identify them all and list them.

    2. In an open electronic lab notebook. Great!!

3. Data without interpretation, discussion, conclusions or context within a wider experimental project. This too is exactly what you will find here, i.e. PIDs pointing to data, with links in that deposition to the location of the interpretation, discussion and context.

It would be really good to have such explicit declarations from the other important scientific publishers. It would also be good to have a persistent identifier allocated to these declarations: thus http://www.rsc.org/journals-books-databases/journal-authors-reviewers/processes-policies/#prior-publication does not have one, but I think it should!
