Monday, February 16, 2015

Clinical Trial Data Transparency and Linked Data

I've been following the discussions about clinical trial transparency and the sharing of clinical trial data with great interest for the last three years. More precisely, my first tweet about this is from early 2012:


There have been a lot of debates over these years about how many clinical trial results actually get published - is it 50% or much more? Journal article publications vs. trial registries? A lot of issues around summary-level data vs. patient-level data, and around de-identification of data and redaction of documents, etc.

All interesting topics, but my interest in all of this lies in the opportunities to make data in, about and related to clinical trials useful, using semantic web standards and linked data principles. In the spring of 2013 I wrote a post on my blog, Talking to Machines, about this after listening to Ben Goldacre, one of the key people behind the AllTrials initiative, where he also acknowledged this:




Here are a couple of recent events, early 2015, related to Clinical Trial Data Transparency and Linked Data:
  • AAAS Panel on Innovations in Clinical Trial Registry
  • Public consultation EMA Clinical trial database
  • IoM report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk

AAAS Panel on Innovations in Clinical Trial Registry

So, I really liked what I saw in the program for a session yesterday evening (15 February 2015) at the American Association for the Advancement of Science annual meeting in San Jose (#AAASmtg), a panel on Innovations in Clinical Trial Registers:
Documents relating to trials -- protocols, regulatory summaries of results, clinical study reports, consent forms, and patient information sheets -- are scattered in different places. It is difficult to track the information that is available, in order to audit for gaps in information and for doctors and regulators to be sure they have all the information they need to make decisions about medicines. There is an unprecedented opportunity to refine how clinical trial data are shared and linked.

Public consultation EMA Clinical trial database

This is similar to what I wrote last week when I tried to "act courageously" and responded to "the public consultation on how the transparency rules of the European Clinical Trial Regulation will be applied in the new clinical trial database launched by the European Medicines Agency (EMA)":
Make use of modern data standards and access methods to make access to the clinical trial database developer-friendly, the data machine-processable, and the trials and their components linkable. Leverage initiatives and principles such as CDISC Standards in RDF (under review), which builds on the W3C stack of semantic web standards, openFDA, which uses developer-friendly REST APIs with JSON (openFDA API reference), and the linked data principles.

IoM report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk

A couple of weeks ago the Institute of Medicine (IOM) released an excellent report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk.

Short summary, as I interpret the core message of the report: instead of just designing and planning a study, scientists need to plan and document how they are going to share the data from that study, so that it's usable by others who may want to re-analyze it.

The report has a well-written section on “legacy trials” and an interesting list of challenges:

Infrastructure challenges—Currently there are insufficient platforms to store and manage clinical trial data under a variety of access models. 
Technological challenges—Current data sharing platforms are not consistently discoverable, searchable, and interoperable. Special attention is needed to the development and adoption of common protocol data models and common data elements to ensure meaningful computation across disparate trials and databases. A federated query system of “bringing the data to the question” may offer effective ways of achieving the benefits of sharing clinical trial data while mitigating its risks. 
Workforce challenges—A sufficient workforce with the skills and knowledge to manage the operational and technical aspects of data sharing needs to be developed. 
Sustainability challenges—Currently the costs of data sharing are borne by a small subset of sponsors, funders, and clinical trialists; for data sharing to be sustainable, costs will need to be distributed equitably across both data generators and users.

And for a “clinical trial data and metadata nerd” like me, this is like music :-)

Just because data are accessible does not mean they are usable. Data are usable only if an investigator can search and retrieve them, can make sense of them, and can analyze them within a single trial or combine them across multiple trials. Given the large volume of data anticipated from the sharing of clinical trial data, the data must be in a computable form amenable to automated methods of search, analysis, and visualization.


To ensure such computability, data cannot be shared only as document files (e.g., PDF, Word). Rather, data must be in electronic databases that clearly specify the meaning of the data so that the database can respond correctly to queries. If data are spread over more than one database, the meaning of the data must be compatible across databases; otherwise, queries cannot be executed at all, or are executable but elicit incorrect answers. In general, such compatibility requires the adoption of common data models that all results databases would either use or be compatible with.
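To make the point about cross-database queries a bit more concrete, here is a minimal sketch of the "bringing the data to the question" idea from the report: a federated SPARQL 1.1 query run from Python. The registry endpoint URLs and the ct: vocabulary are invented for illustration; only the SERVICE mechanism itself is standard.

```python
# Hypothetical sketch: a federated SPARQL 1.1 query that "brings the data to
# the question" across two trial registries. The endpoint URLs and the ct:
# vocabulary are invented for illustration; SERVICE is the standard mechanism.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX ct: <http://example.org/clinical-trial#>

SELECT ?trial ?condition ?resultSummary
WHERE {
  # Trials and the conditions they study, from registry A
  ?trial a ct:ClinicalTrial ;
         ct:studiesCondition ?condition .

  # Matching summary results held in a second, remote database
  SERVICE <http://registry-b.example.org/sparql> {
    ?trial ct:hasResultSummary ?resultSummary .
  }
}
LIMIT 10
"""

sparql = SPARQLWrapper("http://registry-a.example.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["trial"]["value"], row["condition"]["value"])
```

The point of the sketch is that the question is expressed once, and the query engine fetches only the pieces of data it needs from each database, provided the databases share (or map to) a common data model.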

Wednesday, November 5, 2014

ISWC2014 Trip Report

A few highlights from five intensive days at the International Semantic Web Conference (ISWC2014) in lovely Riva del Garda. See also my previous blog post Preparing for ISWC2014 and my live blog from all five days using Storify.

ISWC2014 Storify

Strong industry presence

ISWC is a research-focused conference. However, this year it had a strong industry presence, with a full-day Industry Track, a Semantic Developer workshop, and many of the Lightning Talks coming from industry. It was great to meet Business Analysts and Information Architects from large companies such as Roche, Genentech and NXP Semiconductors, and also from small companies such as the Danish StatGroup.
  • All five Information Architects in the Data Standards Office at Roche / Genentech attended all five days to learn more about the latest in semantic web research, especially traceability and provenance. Frederik Malfait, working for Roche and FDA/PhUSE, described how their RDF implementations of clinical trial data standards form the basis for a model-driven architecture enabling computable protocols, component-based authoring, and automated set-up of clinical trial databases and generation of submission datasets.
  • Marc Andersen, one of the two founders of StatGroup, presented the experience of the Pharmaceutical Users Software Exchange (PhUSE) in developing a semantic representation of statistical results based on RDF and OWL. Providing clinical trial results as linked data will facilitate traceability, data sharing and integration, data mining and meta-analysis, benefiting industry, regulatory authorities and the general public.
  • A business analyst described how NXP Semiconductors is making use of Semantic Web technology such as RDF and SPARQL to manage a product taxonomy for marketing purposes, which forms the key navigation of the NXP website.

Hot topics: Developer friendly, Linked Data Fragments, Provenance and Semantics for Sensors

  • The Semantic Developer Workshop and the conference program included many examples of RDF and SPARQL support in traditional programming languages, such as Java, Perl, C# and JavaScript, as well as in data science languages, such as Python and R. The Semantic Developer of the Year, Kjetil Kjernsmo, from the University of Oslo, presented RDF/Linked Data for Perl. JSON-LD was referred to as the developer-friendly serialization of RDF (a minimal sketch follows after this list).
  • Many of the presentations described how they applied the W3C Provenance (PROV) standard for "information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness." One example was how the standard had been used for event-based traceability in pharmaceutical supply chains via automated generation of linked pedigrees.
  • Semantics for Sensors, for everything from smart building diagnostics, traceability in the pharmaceutical supply chain, and traffic diagnosis to predicting frost in vineyards in Tasmania.
  • "Everyone" talked about the work presented on the best awarded poster: Linked Data Fragments "so light-weight that even a Raspberry Pi can publish DBpedia (Wikipedia structured content) with high availability" http://fragments.dbpedia.org/ 

Best workshop paper award

It was very nice to present our joint EHR4CR, Open PHACTS, SALUS and W3C HCLS paper. It received a best paper award at the pre-conference workshop Context, Interpretation and Meaning for the Semantic Web.

Other ISWC2014 reports

Tuesday, October 28, 2014

RDF as a Universal Healthcare Exchange Language

Here's a short post about a nice webinar series on the Yosemite Manifesto, which proposes RDF as a Universal Healthcare Exchange Language. It is provided by Semantic Technology & Business (@semanticweb).

Here are a couple of tweets I posted during Part 1 (video and slides) with David Booth.





The Yosemite Manifesto has been criticized. I recommend a "very civil discussion, in the face of clear disagreement" between David Booth, Thomas Beale (@wolands_cat) and Dean Allemang (@WorkingOntology): RDF for universal health data exchange? Correcting some basic misconceptions…

I look forward to Part 2, on Friday evening, 7 November (8pm CET), when Conor Dowling of Caregraph will talk on the following: "Lab tests and results have many dimensions from substances measured to timing to the condition of a patient. This presentation will show how RDF is the best medium to fully capture this highly nuanced data."

Tuesday, October 14, 2014

Preparing for ISWC2014

Next week I'll have the great pleasure of attending my first International Semantic Web Conference (ISWC). I've been fascinated by the power of the semantic web stack of standards for many years: standards all based on the RDF model to represent and link data, as well as schemas, models and terminologies. I heard Tim Berners-Lee talk about the Semantic Web for the first time at the WWW8 conference back in 1999 in Toronto, Canada.

The 13th International Semantic Web Conference, ISWC 2014, will take place in Riva del Garda, Trentino, Italy.
At ISWC2014 I'll present a follow-up paper to the one I presented in early September at the Medical Informatics Europe conference (see my earlier blog post: Preparing for MIE2014). It is a joint paper with colleagues from IMI EHR4CR, Open PHACTS, FP7 SALUS and W3C HCLS, now with more details on the use of nanopubs and linksets: A Justification-based Semantic Framework for Representing, Evaluating and Utilizing Terminology Mappings. It will be discussed on Sunday in a pre-conference workshop organized by Alasdair Gray (@gray_alasdair), Paul Groth (@pgroth) et al.: the Workshop on Context, Interpretation and Meaning.

I'm also looking forward to participating in the Semantic Statistics workshop to learn more about things like the Data Cube vocabulary for statistical data. This is a highly relevant topic for FDA/PhUSE and CDISC, as representing clinical trial analysis results data as RDF Data Cubes is a topic at the ongoing PhUSE conference (presentation from UCB) and at the upcoming CDISC Interchange (presentation from DIcore Group, LLC, SAS Data Submission Consulting Services).
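To give a flavour of the Data Cube idea, here is a minimal sketch of a single analysis result expressed as a qb:Observation using the W3C qb: vocabulary. The dataset, dimension and measure URIs are made up for illustration and are not taken from any CDISC or PhUSE model.

```python
# Minimal sketch of one statistical result expressed as an RDF Data Cube
# observation (W3C qb: vocabulary). The dataset, dimension and measure URIs
# below are invented for illustration, not from any CDISC/PhUSE standard.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/trial-results#")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = URIRef("http://example.org/trial-results/obs1")
g.add((EX.demographicsDataSet, RDF.type, QB.DataSet))
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.demographicsDataSet))
g.add((obs, EX.treatmentArm, EX.Placebo))           # dimension: which arm
g.add((obs, EX.analysisVariable, EX.Age))           # dimension: which variable
g.add((obs, EX.statistic, EX.Mean))                 # dimension: which statistic
g.add((obs, EX.value, Literal(42.3, datatype=XSD.decimal)))  # the measured value

print(g.serialize(format="turtle"))
```

Because every number is anchored to explicit dimensions, observations from different trials and tables can be queried and compared with the same SPARQL query, which is exactly the traceability argument in the PhUSE work.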

On Tuesday evening I've been asked by James Malone (@jamesmalone) from EBI to sit on a panel at the European Ontology Network (EUON) Town Hall meeting, together with Mark Musen of the Stanford Center for Biomedical Informatics Research (I'm star-struck ;-). (EUON started earlier this year at the 1st EUON workshop in Amsterdam.)

I'm looking forward to meeting interesting people from the semantic web community, and also newcomers to the community from organizations such as WHO, Roche and CDISC: Magnus Wallberg from the Uppsala Monitoring Centre (WHO), who is working on an API and RDF project for global ICSR statistics, and, from the FDA/PhUSE Semantic Technology project, Landen Bain (CDISC) and Frederik Malfait (Roche), who will present at the Industry Track on Wednesday.

I'll try to keep my Storify-ISWC2014 updated during the week with interesting tweets, links and notes. And I know the Twitter tag #iswc2014 feed will be lively, as I've followed several earlier ISWC conferences at a distance.

Monday, August 25, 2014

Preparing for MIE2014

After a fantastic, warm and sunny summer here in Sweden, it's time for me to prepare for the European Medical Informatics Conference - MIE2014, in Istanbul, 31 Aug. to 3 Sept.


Our joint paper, co-authored by members across the IMI EHR4CR, Open PHACTS and SALUS projects and the W3C HCLS community, describing "A Framework for Evaluating and Utilizing Medical Terminology Mappings", has been accepted. And I have got the opportunity to present it in the main conference on the 2nd of September.


For me the paper started from some great discussions at ICBO (the International Conference on Biomedical Ontology) in Montreal last year with Trish Whetzel (@TrishWhetzel) and Jim McCusker (@jpmccu) on the topic "mappings are not sufficient - we need the justifications for the mappings". We started to talk about using so-called Nanopublications to capture the justification for a mapping, so that users can make better use of, for example, the mappings provided via the NCBO BioPortal.

When I came back from the ICBO conference I wrote a blog post outlining some more ideas on using Nanopublications and/or Linksets, both stemming from the IMI Open PHACTS project. Some nice comments on, and sharing of, my blog post Justifications of Mappings encouraged me to work more on these ideas. My colleague in the EHR4CR project, Sajjad Hussain (+Sajjad Hussain), pointed me to a very interesting blog post: SALUS project on Terminology Mappings. After some great discussions over lunch at the SWAT4LS conference in Edinburgh with Hong Sun from SALUS, Charlie Mead and Eric Prud'hommeaux from W3C HCLS, Alasdair Gray (@gray_alasdair) from Open PHACTS, and many more, Sajjad and I started to outline a paper describing a framework combining solutions and ideas on evaluating and utilizing terminology mappings.
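To give a rough feel for the nanopublication idea applied to terminology mappings, here is a minimal sketch: the mapping assertion lives in one named graph and its justification in another, so the justification travels with the mapping. All URIs, the example mapping and the justification statements are placeholders for illustration, not taken from the paper, and a real nanopublication would also carry a publication info graph.

```python
# Rough sketch of a nanopublication-style terminology mapping: the mapping
# assertion and its justification live in separate named graphs, linked from
# the nanopub head. URIs and the example mapping are placeholders; a complete
# nanopublication would also include a publication info graph.
from rdflib import Dataset, Namespace, Literal, URIRef
from rdflib.namespace import RDF, SKOS

NP = Namespace("http://www.nanopub.org/nschema#")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/mappings/")

ds = Dataset()
head = ds.graph(URIRef(EX["np1-head"]))
assertion = ds.graph(URIRef(EX["np1-assertion"]))
provenance = ds.graph(URIRef(EX["np1-provenance"]))

# Head graph: declares the nanopublication and points to its parts
np1 = URIRef(EX["np1"])
head.add((np1, RDF.type, NP.Nanopublication))
head.add((np1, NP.hasAssertion, assertion.identifier))
head.add((np1, NP.hasProvenance, provenance.identifier))

# Assertion graph: the mapping itself
assertion.add((EX.localTermDiabetes, SKOS.exactMatch, EX.snomedTermDiabetes))

# Provenance graph: the justification for trusting (or not) the mapping
provenance.add((assertion.identifier, PROV.wasAttributedTo, EX.curatorAlice))
provenance.add((assertion.identifier, PROV.value,
                Literal("lexical match, reviewed by a curator")))

print(ds.serialize(format="trig"))
```

The useful part is that a consuming application can filter mappings on their justification (for example, keep only curator-reviewed exact matches) instead of treating all mappings as equally trustworthy.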

Besides presenting this paper, I look forward to participating in an MIE2014 tutorial and workshop:
  • Tutorial on the IEEE 11073 Standards for Personal Health Devices (Wikipedia: ISO/IEEE_11073). This is a standard I have looked into before. It nicely combines my interest in clinical trial and health care data standards with my previous industrial PhD studies in Mobile Informatics (see the slides presenting my PhLic thesis from 2001: Mobile Newsmaking).
  • Workshop on Interoperability Challenges for enabling secondary use of Electronic Health Records — ICEH 2014. In this workshop I look forward to meeting and talking with many people, including the great metadata and ontology experts Gokce Laleci Erturkmen and Anil Pacaci (@aasinaci), Software Research, Development and Consultancy, Turkey.

I hope to be able to use my Twitter (@kerfors) feed to share interesting things I learn about at the conference, and from the historic city of Istanbul, and to gather tweets, links and photos from each day using Storify, in the same way as I have done from earlier conferences.

So, have a look at my MIE2014 Storify for daily updates 31 Aug. to 3 Sept.

Friday, June 13, 2014

openFDA a Game Changer?

I've been fascinated by innovative people in the FDA organization since I had the pleasure of meeting Dr Norman Stockbridge, the father of FDA's Janus data warehouse model, face to face back in 2005 in Washington, DC.

So when I saw some early notes about an openFDA initiative in June 2013 and early 2014 I posted a couple of tweets.



In April I wrote a short blog post about openFDA. And when I saw how the new Chief Health Informatics Officer at FDA, Taha Kass-Hout (@DrTaha_FDA), started to count down on Twitter a couple of weeks ago, I got really excited. It was nice to follow the #hdpalooza feed on Twitter from the health care data event in early June when openFDA was launched.



And also to see services that immediately picked up the first openFDA API and launched apps to search the 3.4 million adverse event reports, such as Research AE.
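As a quick illustration of how developer-friendly the API is, here is a minimal sketch, as I read the openFDA API reference, that tallies reported reactions for one example drug via the adverse event endpoint. The drug name in the search is just an example, and no API key is used (rate limits apply).

```python
# Minimal sketch: querying the openFDA drug adverse event endpoint.
# Endpoint and parameters follow the openFDA API reference; the drug name
# is just an example, and no API key is used (anonymous rate limits apply).
import requests

resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={
        "search": 'patient.drug.medicinalproduct:"aspirin"',
        "count": "patient.reaction.reactionmeddrapt.exact",  # tally reported reactions
        "limit": 10,
    },
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["results"]:
    print(item["term"], item["count"])
```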



For a brilliant intro to what sits behind the first openFDA API, I recommend Alex Howard's (@digiphile) excellent article: openFDA launches open data platform for consumer protection.
"Instead of contracting with a huge systems integrator.. FDA worked with a tiny data science startup.. to harmonize the data, create a cutting-edge website, and write and release open source code for a data publishing platform for it [on GitHub]"

I think this will be a game changer for how we think about open data, open source and open communities in industry. And yes, I do think we will soon see much more Open, and Linked, Data from FDA, and hopefully also from EMA and across the industry.

Kudos to the developers behind all of this great work, e.g. Sean Herron (@seanherron) and Brian Norris (@Geek_Nurse).

Wednesday, April 2, 2014

openFDA

It's exciting to see how the FDA (Food and Drug Administration) is now starting to make some nice buzz about their new project called openFDA: a research project to provide open APIs, raw data downloads, documentation and examples, and a developer community for an important collection of FDA public datasets.

There is an excellent blog post from Dr. Taha Kass-Hout (@DrTaha_FDA), Chief Health Informatics Officer of the FDA. He writes: "Our initial pilot project will cover a number of datasets from various areas within FDA, defined into three broad focus areas: Adverse Events, Product Recalls, and Product Labeling."

Introducing openFDA
I do hope that the idea of not only open, but also linked, data will be part of this effort. For a quick intro to "Why Linked Data?", check out this nice video explaining the utility of linked data and how it's being used by the UK's Ordnance Survey.


I don't have the full context for all of this, but I do think there are some excellent opportunities for Dr Kass-Hout and his team to leverage linked data initiatives such as these: