Archive for the ‘ Metadata ’ Category

Request for comments/volunteers for the Aristotle Metadata Registry

This is a request for comments and volunteers for an open-source ISO 11179 metadata registry I have been working on, called the Aristotle Metadata Registry (Aristotle-MDR). Aristotle-MDR is a Django/Python application that provides an authoring environment for a wide variety of 11179-compliant metadata objects, with a focus on being multilingual. As such, I’m hoping to raise interest among bug hunters, translators, experienced HTML and Python programmers, and data modelers for mapping ISO 11179 to DDI 3.2 (and potentially other formats).

For the eager:


Aristotle-MDR is based on the Australian Institute of Health and Welfare’s METeOR Registry, an ISO 11179 compliant authoring tool that manages several thousand metadata items for tracking health, community services, hospital and primary care statistics. I have undertaken the Aristotle-MDR project to build upon the ideas behind METeOR, extending it to improve compliance with 11179 while also allowing for access and discovery using other standards, including DDI and GSIM.

Aristotle-MDR is built on a number of existing open source frameworks, including Django, Haystack, Bootstrap and jQuery, which allow it to easily scale from mobile to desktop on the client side, and from small shared hosting to full-scale enterprise environments on the server side. Alongside the in-built authoring suite is the Haystack search platform, which supports a range of search solutions, from enterprise engines such as Solr or Elasticsearch down to smaller-scale search backends.
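For illustration, swapping Haystack backends is purely a Django settings change; below is a minimal sketch, where the index path and index name are my own assumptions, not Aristotle-MDR’s shipped configuration.

```python
# settings.py sketch (illustrative only, not Aristotle-MDR's actual settings).
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# A small shared-hosting deployment might use the file-based Whoosh backend...
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.whoosh_backend.WhooshEngine',
        'PATH': os.path.join(BASE_DIR, 'whoosh_index'),
    },
}

# ...while an enterprise deployment could swap in Elasticsearch instead:
# HAYSTACK_CONNECTIONS = {
#     'default': {
#         'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
#         'URL': 'http://127.0.0.1:9200/',
#         'INDEX_NAME': 'aristotle',
#     },
# }
```

The rest of the application code is unaffected by which backend is configured, which is what lets the same codebase serve both ends of that scale.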

The goal of Aristotle-MDR is to conform to the ISO/IEC 11179 standard as closely as possible, so while it has a limited range of metadata objects, much like the 11179 standard itself it allows for the easy extension and inclusion of additional items. Extensions already available include:

Information on how to create custom objects can be found in the documentation:

Due to the wide variety of ways users need to access information, there is a download extension API that allows for the creation of a wide variety of download formats. Included is the ability to generate PDF versions of content from simple HTML templates, and an additional module allows for the creation of DDI 3.2 (at the moment this supports a small number of objects only):

As mentioned, this is a call for comments and volunteers. First and foremost I’d appreciate as much help as possible with my mapping of 11179 objects in DDI3.2 (or earlier versions), but also with the translations for the user interface – which is currently available in English and Swedish (thanks to Olof Olsson). Partial translations into other languages are available thanks to translations in the Django source code, but additional translations around technical terms would be appreciated. More information on how to contribute to translating is available on the wiki:

To aid with this I’ve added a few blank translation files in common languages. Once the repository is forked, it should be relatively straightforward to edit these in GitHub and send a pull request back without having to pull down the entire codebase. These are listed by ISO 639-1 code, and if you don’t see your own listed, let me know and I can quickly pop a boilerplate translation file in.

If you find bugs or identify areas of work, feel free to raise them either by emailing me or by raising a bug on GitHub:

Aristotle MetaData Registry now has a Github organisation

This weekend’s task has been upgrading Aristotle from a single-user repository to a GitHub organisation. The new Aristotle-MDR organisation holds the main code for the Aristotle Metadata Registry, but alongside that it also has the DDI Utilities codebase, some additional extensions, and the new “Aristotle Glossary” extension.

This new extension pulls the Glossary code base out of the core codebase to improve its status as a “pure” ISO/IEC 11179 implementation, as stated in the Aristotle-MDR mission statement. It will also provide additional Django post-save hooks to provide easy look-ups from glossary items to any item that requires the glossary item in its definition.
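Under Django this boils down to a post-save signal that recomputes a reverse index from glossary terms to the items whose definitions mention them. A framework-free sketch of that look-up follows; the function and field names are my own illustration, not the actual Aristotle-MDR API.

```python
# Illustrative sketch: given item definitions and a list of glossary terms,
# build an index from each term to the ids of items that use it in their
# definition. A Django post-save hook would re-run this for the saved item.
def glossary_usage_index(definitions, glossary_terms):
    """Map each glossary term to the ids of items whose definitions mention it."""
    index = {term: set() for term in glossary_terms}
    for item_id, text in definitions.items():
        for term in glossary_terms:
            if term.lower() in text.lower():
                index[term].add(item_id)
    return index

defs = {
    "person-age": "The age of a person in completed years.",
    "dwelling-type": "A classification of the structure of a dwelling.",
}
print(glossary_usage_index(defs, ["person", "dwelling"]))
```

In the real extension the signal fires per saved item, so only that item’s entries need updating rather than rebuilding the whole index.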

If you are curious about the procedure for migrating an existing project from a personal repository to an organisation, I’ve written a step-by-step guide on StackExchange that runs through all of the steps and potential issues.

Aristotle-Metadata-Registry – My worst kept secret

About six months ago I stopped blogging frequently, as I began work on a project that was not quite ready for a wider audience. Today that period comes to a close.

Over the past year, I have been working on a new piece of open-source software – an ISO/IEC 11179 metadata registry. This originally began from my experiences working on the METeOR Metadata Registry, which gave me an in-depth understanding of the systems and governance issues around the management of metadata across large-scale organisations. I believe Aristotle-MDR provides one of the closest open-source implementations of the information model of Part 6 and the registration workflows of Part 3, in an easy-to-install, easy-to-use piece of open-source software.

In that time, Aristotle-MDR has grown to several thousand lines of code – most substantially, over 5,000 lines of Python code rigorously tested using a suite of over 500 regression tests – and rich documentation covering installation, configuration and extension. From a front-end perspective, Aristotle-MDR uses the Bootstrap, CKEditor and jQuery libraries to provide a seamless, responsive experience; the Haystack search engine provides scalable and accurate search; and custom wizards encourage the discovery and reuse of metadata at the point of content creation.

One of the guiding principles of Aristotle-MDR has been not only to model 11179 in a straightforward fashion, but to do so in a way that complies with the extension principles of the standard itself. To this end, while the data model of Aristotle-MDR is and will remain quite bare-bones, it provides a robust, tested framework on which extensions can be built. A number of such extensions are already being built, including those for the management of datasets, questionnaires, and performance indicators, and for the sharing of information in the Data Documentation Initiative XML format.

In the last 12 months, I have learned a lot as a systems developer and had the opportunity to contribute to several Django-based projects, and I look forward to sharing Aristotle, especially at IASSIST 2015, where I aim to present Aristotle-MDR as a stable 1.0 release. In the interim, there is a demonstration server for Aristotle available, with two guest accounts and a few hundred example items for people to use, test and possibly break.

Why Linus Torvalds is wrong about XML

Linus Torvalds is one of the most revered figures in modern computer science and has made the kind of contributions to the world that I hope to achieve. However, given his global audience, his recent statements about XML give me pause for reflection.

I have worked with XML in a number of jobs, helped with the specification of international XML formats, written tutorials on their use, and even made my own XML format (with reason I might add). And I must say, in reply to Linus’s statement that

XML is the worst format ever designed

XML isn’t the problem; the problem is bad programmers. Computer Science is a broad field, covering not just the creation of programs but also the correct specification of information for computation. The lack of appreciation for that second aspect has seen the recent rise of “Data Science” as a field – a mash of statistics, data management and programming.

While it is undeniable that many programmers write bad XML, this is because of poor understanding and discipline. One could equally say, “people write bad code, let’s stop them writing code”. People will always make mistakes or cut corners; the solution is education, not reinventing the wheel.

Linus and the rest of the Subsurface team are well within their rights to use the data formats they choose, and I am eager to see what new formats he can design. But with that in mind, I will address some of the critiques from Linus and others about XML and point out their issues, followed by some handy tips for programmers looking at using XML.

XML should be human readable

I did the best that I could with XML, and I suspect the subsurface XML is about as pretty and human-readable as you can make that crap

CSV isn’t very readable; C, Perl and Python aren’t very human-readable either. What is “human-readable” is very subjective, as even English isn’t human-readable to non-English speakers.

Restricting ourselves to just technology, CSV isn’t very readable for any non-trivial amount of data, as the header will scroll off the top of the screen, and data will overflow onto the next line or outside the horizontal boundaries of the screen. One could argue that it’s possible in Excel, OpenOffice or a Vim/Emacs plugin to lock the headers to the top of the screen – and now we have used a tool to overcome limitations in the format.

Likewise, the same can be said for computer code: code-folding, auto-completion of long function and variable names, and syntax highlighting are all software features that overcome failures in the format and make the output more “human-readable”. Plain text supports none of the above, yet no one would recommend writing code in Notepad for its lack of features.

Likewise, I would never, ever recommend writing XML in a non-XML editor. Auto-adding of closing tags, schema checking as you type, easy access to the schema via hotlinks from elements and attributes, and XPath query-and-replace are all vital functions of a good XML editor. All of these make writing XML much easier and more approachable; as with code or CSV, a programmer need only spend enough time in an XML editor to understand the format well enough to make writing XML in code easier.

While it can be said that a poor craftsman blames his tools, a good craftsman knows when to use the right tools as well.

XML files should stand alone

This is most visible in this bug raised in Subsurface where it is stated that:

Subsurface only ever stores metric units. But our goal is to create files that make sense and can be read and understood without additional information.

Now, examination of a sample of the XML from Subsurface shows a glaring contradiction: there is nothing in this file that says the units are metric. The distance ‘m’ could equally stand for ‘miles’, and while the order of magnitude would make misinterpretation hard for a human, a dive computer with an incorrect understanding may miscalculate the required oxygen pressure, leading to potential death. To accurately understand this file, I need to find the documentation – i.e. additional information. Schemas exist precisely to explicitly describe a data file.

Additionally, because data is stored as “human-readable” strings, I could validly put in “thirty metres” instead of “30.0 m” as a depth. At this point the program might fail, but as someone writing the data elsewhere I’d have no idea why. Apart from being a description of the data, a schema exists as a contract: if you say the data is of this form, then these are the rules it must conform to. When you are looking at sharing data between programs or organisations, this ability to lean on technical enforcement is invaluable, as making “bad” data becomes that much harder.
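To make the contract concrete, here is a minimal sketch assuming the lxml library is available. The schema is my own invention for illustration – not the real Subsurface format – and it forces the depth to be a decimal and the unit to be an explicit, enumerated attribute, so “thirty metres” is rejected where “30.0” passes.

```python
# Sketch of "schema as contract" using lxml (the schema is illustrative,
# not the actual Subsurface format).
from lxml import etree

schema = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="dive">
    <xs:complexType>
      <xs:attribute name="depth" type="xs:decimal" use="required"/>
      <xs:attribute name="unit" use="required">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="m"/>
            <xs:enumeration value="ft"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

good = etree.XML(b'<dive depth="30.0" unit="m"/>')
bad = etree.XML(b'<dive depth="thirty metres" unit="m"/>')
print(schema.validate(good))  # True
print(schema.validate(bad))   # False
```

Note that the unit is now part of the contract too: a file claiming depths in feet must say so, rather than leaving the reader to guess.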

XML shouldn’t need other formats

This is a tricky one, as when people think of XML, even if they have made a schema, their mind stops there. XML isn’t just a format; it’s more a suite of related formats that can make handling and manipulating information easier.

It’s worth noting that people have raised databases within that thread as an alternative – SQL is only a query language, and requires the formal Data Definition Language to describe the data and an engine to query over it. Likewise, HTML without CSS, JavaScript or any of the programming and templating languages that power the web would be much less useful to the general public.

Similarly, isolating XML from XML schemas means your data has no structure. Isolating XML from XQuery and XPath means you have no way of querying your data. Without XSLT there is no easy, declarative way to transform XML, and having done this with both traditional languages and XSLT, the latter makes using and transforming XML much easier. Ultimately, using XML without taking advantage of the many technologies in the wider XML landscape is not using the technology to its best.
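As a small illustration of that querying, even Python’s standard library can run simple XPath-style queries over XML. The element and attribute names below are hypothetical, not taken from the Subsurface format.

```python
# Querying XML with XPath-style expressions using only the standard library.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<divelog>
  <dive depth="30.0" duration="PT30M"/>
  <dive depth="12.5" duration="PT45M"/>
</divelog>
""")

# ElementTree supports only a subset of XPath, so the numeric predicate is
# applied in Python; a full XPath 1.0 engine (such as lxml's) could express
# it directly as //dive[@depth > 20].
deep = [d for d in doc.findall(".//dive") if float(d.get("depth")) > 20]
print([d.get("duration") for d in deep])  # ['PT30M']
```

The point is not this particular query but that the query language comes for free once the data is XML, with no bespoke parsing code.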

Tips for good XML

With all of that aside, XML like all technologies can be used poorly. However, when done well and documented properly, a good XML format with an appropriate schema can reduce errors and give vital metadata that gives data context and longevity. So I present a few handy tips for using XML well.

  1. Only use XML when appropriate. XML is best suited to complex data, especially hierarchical data. As Linus (and others) points out in the linked thread, tabular data is much better suited to CSV or more structured tabular formats, simple key-values can be stored in ini files, and marked-up text can be done in HTML, Markdown or any number of other formats.
  2. Look for other formats. If you are thinking of using XML for your tool, stop and see what others have already done. The world doesn’t need another format, so if you are thinking of making one you should have a very, very good reason to do so.
  3. Use a schema or doctype. If you are choosing to make your own format, this is the most important point. If you choose to use XML, make a schema. How you capture this (Doctype, XSD Schema, Schematron, RELAX NG) is largely irrelevant; what is important is that your data format is documented. There are even tools that can automate creating schema stubs from documents, so there is no excuse not to. As stated above, an XML schema is the formal contract about what your data is, and it lets others know that if the data doesn’t conform to this format then it is broken.
  4. Use XML datatypes. XML already has specifications for text, numeric, datetime and identification data. Use these as a starting point for your data.
  5. Store one type of data per field. While the difference between <dive duration="30:00 mins"> and <dive duration="30" durationUnit="mins"> is minimal, the former uses a single string for two pieces of data, while the latter uses two fields, a number and an enumerable, each storing one piece of data. An even better solution is the XML duration datatype, <dive duration="PT30M">, based on the existing ISO 8601 standard.
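As a footnote to tip 5, the duration datatype is also easy to consume from code. A minimal Python sketch, handling only the time portion of a duration (a full xs:duration parser would also cover years, months and days):

```python
# Parse simple ISO 8601-style durations of the form PTnHnMnS into timedeltas.
import re
from datetime import timedelta

def parse_duration(value):
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", value)
    if not m or not any(m.groups()):
        raise ValueError(f"unsupported duration: {value!r}")
    hours, minutes, seconds = (float(g) if g else 0.0 for g in m.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(parse_duration("PT30M"))    # 0:30:00
print(parse_duration("PT1H30M"))  # 1:30:00
```

Compare this with parsing the free-text "30:00 mins": the datatype version has one unambiguous interpretation and a ready-made failure mode for malformed input.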

A Request for Comments on a new XML Questionnaire Specification Format (SQBL)

This is an announcement and Request for Comments on SQBL a new
open-source XML format for the cross-platform development of questionnaire
specifications. The design decisions behind SQBL and additional details are the
subject of a paper to be presented in 2 weeks at the 2013 IASSIST conference in
Cologne, Germany:
– Do We Need a Perfect Metadata Standard or is “Good Enough” Good Enough?
However, to ensure people are well-informed ahead of time, I am releasing
details ahead of the conference.

The gist

SQBL – The Structured (or Simple) Questionnaire Building Language is an
emerging XML format designed to allow survey researchers of all fields to
easily produce questionnaire specifications with the required structure to
enable deployment to any questionnaire platform – including, but not limited
to, Blaise, DDI, LimeSurvey, XForms and paper surveys.

The problem

Analysing the current state of questionnaire design and development shows that
there are relatively few tools available that are capable of allowing a survey
designer to easily create questionnaire specifications in a simple manner,
whilst providing the structure necessary to verify respondent routing and
provide a reliable input to the automation of questionnaire deployment.

Of the current questionnaire creation tools available, they either:
* prevent the sharing of content (such as closed tools like SurveyMonkey)
* require extensive programming experience (such as Blaise or CASES)
* or use formats that make transformation difficult (such as those based on DDI)
Given the high cost of questionnaire design – in the creation, testing and
deployment of final questionnaires – a format that can reduce the cost in any
or all of these areas will have positive effects for researchers.

Furthermore, by providing researchers with the tools necessary to easily create
questionnaires, they will consequently create structured metadata, thus reducing
the well-understood documentation burden for archivists.

Structured questionnaire design

Last year, I wrote a paper, “The Case Against the Skip Statement”, which
described the computational theory of questionnaire logic – namely, the
structures used to describe skips and routing logic in questionnaires. This
paper was awarded third place in the International Association for Official
Statistics ‘2013 Young Statistician Prize’. The paper
is awaiting publication, but can be made available for private reading on
request. It proposed that the routing logic in questionnaires is structurally
identical to that of computer programs. Following this assertion, it stated
that a higher-order language can be created that acts as a “high-level
questionnaire specification logic” that can be compiled to any questionnaire
platform, in much the same way that computer programming languages can be
compiled to machine language. Unfortunately, while some existing formats
incorporate some of the principles of Structured Questionnaire Design, they are
incomplete or too complex to provide the proposed benefits.

SQBL – The Structured (or Simple) Questionnaire Building Language

SQBL is an XML format that acts as a high-level language for
describing questionnaire logic. Small and simple, but powerful, it
incorporates XML technologies to reduce the barrier to entry and make
questionnaire specifications readable, even in raw XML. Underlying this
simplicity is a strict schema that enforces single solutions to problems,
meaning SQBL can be transformed into a format for any survey tool that has a
published specification.

Furthermore, because of its small schema and incorporation of XML and HTTP core
technologies, it is easier for developers to work with. In turn, this makes
survey design more comprehensible through the creation of easier tools, and
will help remove the need for costly, specialised instrument programmers
through automation.

Canard – the SQBL Question Module Editor

Announced alongside the Request for Comments on SQBL is an early beta release
of the SQBL-based Canard Question Module Editor. Canard is
designed as a proof-of-concept tool to illustrate how questionnaire
specifications can be generated in an easy to use drag-and-drop interface. This
is achieved by providing designers with instant feedback on changes to
specifications through its two-panel design that allows researchers to see the
logical specification, routing paths and example questionnaires all within the
same tool.

SQBL and other standards

SQBL is not a competitor to any existing standard, mainly because a structured
approach to questionnaire design based on solid theory has never been attempted
before. SQBL fills a niche that other standards don’t yet fill well.

For example, while DDI can archive any questionnaire as is, this is because
of the loose structure necessary for being able to archive uncontrolled
metadata. However, if we want to be able to make questionnaire specifications
that can be used to drive processes, what is needed is the strict structure of
a format like SQBL.

Similarly, SQBL has loose couplings to other information through standard HTTP
URIs, allowing linkages to any networked standard. For example, Data Elements
may be described in a DDI registry, which an SQBL question can reference via
its DDI-URI. Additionally, to support automation, a survey instrument described
inside a DDI Data Collection can use existing linkages to external standards to
point to an SQBL document via a standard URL, rather than pointing to a DDI
Sequence containing the instrument details. Once data collection is complete,
harmonisation can be performed, as each SQBL module has questions pointing to
variables, so data remains comparable downstream.

SQBL in action

The SQBL XML schemas are available on GitHub, in a repository that also
contains examples and files from the video tutorials. There is also a website
with more information on the format and on some of the principles of
Structured Questionnaire Design.

If you don’t like getting your hands dirty with XML you can download the
Windows version of the Canard Question Module Editor from Dropbox and start producing questionnaire specifications
immediately. All that needs to be done is to unzip the file and run the file
named . Due to dependencies, flowcharts may not be immediately
available; however, this can be fixed by installing the free third-party
graphing tool Graphviz.

Lastly, there is a growing number of tutorial videos on how to use Canard on Youtube.

Video 1 – Basic Questions (2:17 min)
Video 2 – Complex Responses (2:17 min)
Video 3 – Simple Logic (4:11 min)

There is also an early beta video that runs through creating an entire
questionnaire showing the side-by-side preview. (13:21 mins)

Joining the SQBL community

First of all, there is a mailing list for SQBL hosted on Google Groups.

Along with this, each of the GitHub repositories includes an issue tracker.
Both Canard and SQBL are in early design stages, so there is an opportunity
for feedback and input to ensure both support the needs of all questionnaire
designers.

Lastly, while there are initial examples of conversion tools to transform SQBL
into DDI-Lifecycle 3.1 and XForms, there is room for growth. Given the
proliferation of customised solutions to deploy both paper and web-forms there
is a need for developers to support the creation of transformations from SQBL
into formats such as Blaise, LimeSurvey, CASES and more.

If you have made it this far thank you for reading all the way through, and I
look forward to all the feedback people have to offer.

Cheers and I look forward to feedback now or at IASSIST,

Samuel Spencer.
SQBL & Canard Lead Developer
IASSIST Asia/Pacific Regional Secretary

Beginning the soft launch of SQBL and Canard

Over the past week I’ve started finalising a version of Canard and SQBL ready for early beta testing and public review ahead of IASSIST2013. While I’ll be putting together more documentation later in the week, below is the first of a series of short tutorials on how Canard will eventually be used.

Also, later this week will see the source code for Canard, as shown in the video below, released on GitHub, as well as a beta binary for ease of use during testing. For now the SQBL schemas can be seen on GitHub and the main SQBL website contains more information. In the meantime, enjoy the two videos below to see how a strict structure can make questionnaire design easier than ever before!

Why I’ve chosen to make a new XML standard for questionnaires

XKCD #927

Normally I don’t like XKCD, but this is so true.

I’ve made no secret of the fact that I’ve been working on a new format for questionnaires. I recently registered a domain for the Structured Questionnaire Building Language, and have been releasing screenshots and a video of a new tool for questionnaire design that I’m working on. Considering that I’ll be covering this work at at least one conference this year, and given my close ties in a few technical communities I felt that it would be good to discuss why this is the case, and answer a few questions that people may have.

Why is a new format for questionnaire design necessary?

Over the past few years I’ve done a lot of research analysing how questionnaires are structured in a very generic sense. Given the simplistic nature of the logic traditionally found in paper and electronic questionnaires, and their logical similarity to computer programming, I’ve theorised that it should be possible to use the same methods (and thus the same tools) to support all questionnaires – including the oft-ignored paper questionnaire. Unfortunately, attempts to improve questionnaires have focused on proprietary or limited use cases, which is why tools and formats such as Blaise, CASES and queXML exist but generally only support telephone or web surveys. Likewise, all of these attempts have ignored the logical structure in various ways and discouraged questionnaire designers from becoming intimately, and necessarily, familiar with the logic of their questionnaires.

SQBL on the other hand is an attempt at designing a specialised format to support the capture of the generic information that describes a questionnaire. Likewise, Canard is a parallel development of a tool to allow a researcher to quickly create this information, as a way to help them create their questionnaire, rather than just document it afterwards.

As a quick aside, if you are interested in this research on Structured Questionnaire Design, I’m still awaiting publication, but if you email me directly I’ll be glad to forward you as much as you care to read – and probably more.

Why not just use DDI?

Given the superficial overlap between SQBL and DDI, this is not an uncommon question, even at this early stage. I’ve written previously that writing software for DDI isn’t easy, and when trying to write software that is user-friendly, can handle all of the edge cases that DDI does, and operates using the referential structures that make DDI so powerful, it’s hard. Really hard. Given that a format is nothing without the tools to support it, I have written a three-part essay on how to extend DDI in the necessary ways to support complex questionnaires. However, even this is fraught with trouble, as software that writes these extensions would have trouble reading “un-extended” DDI. What is needed is a tool that is powerful enough to capture the content required of well-structured questionnaires in a user-friendly way, and it seemed increasingly unlikely that this was possible in DDI.

A counterpoint is to also ask “why DDI?” DDI 2 and 3 are exemplary formats when looking at archival and discovery, but this is because both are very flexible and can capture any and every possible use case – which is absolutely vital when working in an archive to capture what was done. However, when we turn this around and instead look at formats that can be predictably and reliably written and read, what is needed is rigidity and strict structure. While such rigidity could be applied to DDI, it risks fracturing the user base, leading to “archival DDI”, “questionnaire DDI” and who knows what else.

Thus I deemed the decision to start again, with a strict, narrow use case, uncomfortable but necessary.

What about DDI?

I did some soul-searching on this (as much soul-searching as one can do around picking sides in a ‘standards war’), and realised that there really is no point in “picking sides”. SQBL isn’t perfect and isn’t yet complete, and more to the point it supports a very narrow use case. If I view DDI as a flexible archival format, there is a lot of work necessary to support conversion into and out of it for discovery and reuse. Likewise, if I view SQBL as a rigid, living format for creating questionnaires, the question becomes how to link this relatively limited content with other vital survey information. By definition SQBL has a limited useful timeframe, and once data has been collected (if not earlier) it is no longer necessary, so conversion or linkages to other formats become required.

Somewhere between these overlaps is where DDI and SQBL will shake hands, and perhaps in future standards this handshake will be formalised. This means there is a lot of work on both sides of the fence, in which I look forward to playing an active part. But in the interim, and for questionnaire design, I believe SQBL will prove to be a necessary new addition to the wide world of survey research standards.

Why are there so few survey design tools that use DDI?

Having been a close part of the DDI community for some time, and having attended a number of DDI-focused conferences, I have noticed a disturbing trend: there are relatively few content editors that use DDI. I have chosen this term very carefully, as there are a number of DDI editors, but these are tools whose primary function is to produce DDI XML. When I say a DDI-powered content editor, I mean a tool with a limited use case that happens to use DDI as the storage format. As an example, we can look at Colectica – a leading DDI editor. In this tool, to create a survey with some pathing between questions, first I create a QuestionScheme with some Questions, then I create an Instrument, which creates a ControlConstructScheme for me, and then I can start pulling questions into this. If a new question needs to be made, I switch back to my QuestionScheme view, make a new question, then switch back to the instrument and drag it in. While it is able to make perfectly valid DDI, this is not entirely how people think during this process. It is analogous to opening a word processor to write a letter, and having to write an alphabetical list of words that I can then drag into the appropriate place in the document, rather than just typing away. But this isn’t in any way the fault of Colectica itself; it is arguably the only way that an editor that uses DDI could feasibly be written.

To look at why this is, I want to examine two simple use cases that should be achievable using a simple tool, with the corresponding data managed in DDI. Firstly, how does a survey designer go about reusing an existing question in their survey, and secondly, how does a survey designer create a new question inside an existing survey instrument? To answer these questions I want to look at them from a user-interaction point of view, and pull out what a survey designer would have to do to ensure that they have the bare minimum content needed to be ‘good’ DDI.

Use case 1: Reusing a question

One of the commonly stated advantages of DDI is the reusability of its managed content, so it should be the case that reusing a question is a relatively simple affair. For this use case, we picture a hypothetical user interface, where a survey designer wants to insert a new question into an existing sequence of questions. In DDI terms, they wish to insert a QuestionConstruct into a Sequence, not make a new QuestionItem in a QuestionScheme. So ideally the designer should need to:

  1. Search for a question using some search parameters
  2. If a suitable question is found, drag this question into the sequence.

However, this isn’t the case. First of all, the user interface needs to differentiate between the QuestionItem and the QuestionConstruct, as the QuestionConstruct is used to insert a question into a sequence by reference. So already we need the survey designer to understand DDI well enough to differentiate these objects. Secondly, if the needed QuestionConstruct doesn’t exist, it needs to be created by the user, which then necessitates that the user is prompted for the ControlConstructScheme that the new QuestionConstruct lives in. So what actually has to happen is this:

  1. Search for a question using some search parameters
  2. If a suitable question is found, look at the list of QuestionConstructs (each with their own different contexts), and drag the appropriate one into the sequence. Nothing further needs to be done.
  3. If an appropriate QuestionConstruct doesn’t exist, create it with its own label and description.
  4. Prompt the user for where the QuestionConstruct should be maintained
  5. Search for a ControlConstructScheme using some search parameters, selecting the appropriate one.
  6. If none is found, create one with its own label, description, version, etc…
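The two-step ideal and the six-step reality can be seen in a DDI 3.1 fragment. The element names below follow the DDI 3.1 data collection (d:) and reusable (r:) namespaces, but the identifiers are hypothetical and the structure is simplified:

```xml
<!-- Hypothetical DDI 3.1 fragment: reusing an existing QuestionItem
     requires a QuestionConstruct (in a ControlConstructScheme) that
     references it, plus a reference to that construct in the Sequence. -->
<d:ControlConstructScheme id="CCS1" version="1.0.0">
  <d:Sequence id="Seq1">
    <!-- Step 2: the reuse itself is just this one reference... -->
    <d:ControlConstructReference>
      <r:ID>QC1</r:ID>
    </d:ControlConstructReference>
  </d:Sequence>
  <!-- Steps 3 to 6: ...but the construct and its scheme must exist first -->
  <d:QuestionConstruct id="QC1">
    <d:QuestionReference>
      <r:ID>Q1</r:ID>  <!-- the reused QuestionItem, maintained elsewhere in a QuestionScheme -->
    </d:QuestionReference>
  </d:QuestionConstruct>
</d:ControlConstructScheme>
```

Note that the reference itself is trivial; it is the surrounding scaffolding (the construct and its scheme) that the designer is forced to understand and maintain.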

Here the simple act of reuse has tripled in size, requiring the survey designer to understand more of the DDI model than necessary, and in many cases to become administratively responsible for content well beyond their original survey.

Use case 2: Creating a question

However, this user interaction becomes much more complex when a user wants to add a new question. Again this should be a relatively simple affair, where a survey designer has made the decision that a new question needs to be created. In DDI terms, they wish to insert a QuestionConstruct into a Sequence, and create a new QuestionItem in a QuestionScheme. So ideally the designer should need to:

  1. Click to create a new question in the location needed.
  2. Add the corresponding information, such as question text, a label and description and intent.

Again however, this is far from how it would work using a DDI compatible tool.

  1. Click to create a new question in the location needed.
  2. Add the corresponding information, such as question text, a label and description and intent.
  3. Prompt the user for the QuestionScheme where the QuestionItem should be maintained.
  4. Search for a QuestionScheme using some search parameters, selecting the appropriate one.
  5. If none is found, create a QuestionScheme with its own label, description, version, etc…
  6. Create the necessary QuestionConstruct with the corresponding information, such as a label and description.
  7. Prompt the user for where the QuestionConstruct should be maintained.
  8. Search for a ControlConstructScheme using some search parameters, selecting the appropriate one.
  9. If none is found, create one with its own label, description, version, etc…
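A sketch of everything the nine steps above leave behind for a single new question might look like the following. The identifiers and question text are hypothetical, and the structure is simplified to match the fragments later in this post:

```xml
<!-- Hypothetical DDI 3.1 sketch: four objects across two schemes,
     all created or touched just to add one question. -->
<d:QuestionScheme id="QS1" version="1.0.0">        <!-- steps 3 to 5 -->
  <d:QuestionItem id="Q2">                         <!-- steps 1 and 2 -->
    <d:Text>How old is your dog?</d:Text>
  </d:QuestionItem>
</d:QuestionScheme>

<d:ControlConstructScheme id="CCS1" version="1.0.0">  <!-- steps 7 to 9 -->
  <d:QuestionConstruct id="QC2">                      <!-- step 6 -->
    <d:QuestionReference><r:ID>Q2</r:ID></d:QuestionReference>
  </d:QuestionConstruct>
</d:ControlConstructScheme>
```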

Here the act of simply adding a new question is a nine-step process. It can be argued that not all of the steps are necessary, or that content for ‘unimportant metadata’ could be filled in at a later stage, but this means that objects remain empty for an indeterminate amount of time, or that the tool relies on conventions to hide information from users (e.g. if a QuestionItem can only ever link to one QuestionConstruct, the two can be treated as ‘the same’). However, while valid DDI, this violates the ‘spirit of the standard’.

Why is this important?

Ultimately, users and their tools make or break a standard. If no one can write DDI, or write tools that write DDI, or write tools that people want to use, then the very purpose of the standard is called into question. But the wider implication is this: the value of content stored as DDI is contingent on its reuse, but that content must initially come from somewhere. Perhaps in its current state DDI can be made to work for post-hoc research archivists. However, it is still lacking as a living standard that can be used throughout the survey lifecycle, simply due to its over-engineered state.

How can this be resolved?

Firstly, by drastically simplifying the content requirements and referential structure of DDI – and this simplification will be achieved by talking with users and determining their needs. Archivists, survey researchers and central bankers all have very different needs, as they all do wildly different things. While it’s not infeasible that one standard could meet all of their needs, doing so starts with identifying those needs. As a first step I offer this as an opening question: does anyone actually want to reuse just a single question? I ask because, in my limited experience, people really just want to reuse large modules of questions – a limited set of questions with their own internal logic that can be reused across a number of areas. The question of ‘Sex’ probably comes to mind, as it is reused across almost all population research, but the rebuttal is: does anyone ever ask Sex, but not Age?

The DDI Identity Crisis and how to solve it – Part 1 : Versions and Identifiers

This is a 2-part post that examines the different classes of identifiable object in DDI, and offers critiques of their current design and possible improvements to the standard, with the aim of simplifying the model and (hopefully) improving uptake of the standard. But first we need a quick look at what the three different classes of identifiable object in DDI are and where they are used, in increasing order of complexity:

  1. Non-identifiable – We’ll include this as the ‘base’ case: any DDI object that isn’t captured by the classes below. These objects are mostly used to capture basic metadata concepts, such as labels or descriptions for more complex objects.
  2. Identifiable – Objects that require only an ID attribute. These are mostly basic metadata, and below I’ll show why the distinction between identifiables and non-identifiables is blurry, and why these objects probably don’t need identifiers at all.
  3. Versionable – A level above identifiables, these require a version and an ID. This is probably the most commonly encountered class, as versionables comprise the bulk of the survey objects people are used to dealing with – such as questions, variables and codelists. Further down I talk about how these objects don’t need a version, and about the administrative burden versioning adds without a clear benefit.
  4. Maintainable – The most complex class – an ID, a version and a reference to a maintenance agency. Maintainable objects are mostly used either as container objects, such as schemes, resource packages or groups, or as high-level, survey-wide objects such as Study Units or Archival objects. In the following post I’ll show how they are currently managed, and how they could be better managed as XML objects to simplify RESTful interfaces for DDI.

Identifiable objects don’t need identifiers

Identifiable objects are the subset of all objects within DDI that have only an ID, but no version or agency. Since ID attributes in DDI are only required to be locally unique within the parent maintainable, to reference an identifiable its ID isn’t enough – you also need the ID of the parent object! So while an identifiable can be referenced, to access it, it is necessary to first identify and retrieve the parent resource.
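As a simplified, hypothetical fragment (the identifiers and element placement are illustrative), an identifiable such as a study’s abstract can only be resolved through its parent maintainable:

```xml
<!-- The Abstract's id "A1" is only unique within its parent
     maintainable, StudyUnit "SU1". A bare reference to "A1"
     cannot be resolved without first fetching SU1. -->
<s:StudyUnit id="SU1" version="1.0.0" agency="example.org">
  <s:Abstract id="A1">
    <r:Content>A longitudinal study of pet ownership.</r:Content>
  </s:Abstract>
</s:StudyUnit>
```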

This becomes interesting when we examine the list of objects which are only identifiable (not versionable or maintainable), shown below:


All of these objects constitute (at least to my mind) very basic, textual and context-dependent metadata. Concepts like an ‘abstract’ or ‘purpose’ only really make sense given the context of what you are summarising. This is reinforced by the fact that this information can only be gathered by first finding the object being summarised.

Which leads us to ask – what makes identifiables different from non-identifiables? In my opinion, nothing – it’s a distinction made out of convenience. Again, in my opinion, identifiables exist because Notes exist. Because the methods for extending and improving DDI were not made obvious to early adopters, DDI Notes have become the most common way to annotate objects, and given the referential nature of Notes, this requires objects to have identities.

The solution: remove IDs from identifiables – if Notes are deprecated as a solution, IDs on identifiables are no longer needed; there is no other reason to identify them, and they can be scaled back to the ‘non-identifiable’ class of object.

Versionable objects shouldn’t have versions

Versionable objects are the set of objects that have both an ID and a version, and (as the DDI User Guide states) “are elements for which changes in content are important to note.” However, both versionables and maintainables have a version that supports tracking changes to an object. This causes a very interesting problem in practice – the identifiers of objects can change without the objects having changed at all!

Let’s look at an example: a maintainable QuestionScheme called QS1 at version 1, with two versionable Questions, Q1 and Q2, both at version 1 as well. Since the full identifier for a versionable also comprises its parent, the full ID for the most recent version of Q1 takes a form similar to QS1:V1|Q1:V1 – simple enough. A problem arises when Q2 is changed to version 2. Technically, since Q2 is a child of the QuestionScheme QS1, QS1 has also changed.

Now, the complexity is that QS1 has changed, so the full ID for the most recent version of Q1 has changed to QS1:V2|Q1:V1. Which leads to the academic question – if Question Q1’s parent has changed, has Q1 itself also changed, meaning that to be a part of the updated parent it also needs a new version?
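The cascade can be sketched with two snapshots of the scheme (the identifiers are hypothetical and the attributes are simplified):

```xml
<!-- Before: the full identifier of Q1 is QS1:V1|Q1:V1 -->
<d:QuestionScheme id="QS1" version="1">
  <d:QuestionItem id="Q1" version="1"/>
  <d:QuestionItem id="Q2" version="1"/>
</d:QuestionScheme>

<!-- After editing only Q2: the scheme must be reversioned, so the
     full identifier of the untouched Q1 becomes QS1:V2|Q1:V1 -->
<d:QuestionScheme id="QS1" version="2">
  <d:QuestionItem id="Q1" version="1"/>  <!-- unchanged, yet its full ID changed -->
  <d:QuestionItem id="Q2" version="2"/>
</d:QuestionScheme>
```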

The discussion to resolve this problem with DDI versionables has actually been kicking around for quite a while, but the solution is pretty clear, as the section header states. The first thing to recognise is that all versionable objects are already versioned by their parent object. So, strictly speaking, given only the full ID of the parent and the ID of a versionable, it is possible to identify a single object, for the simple fact that all IDs must be unique within their parent maintainable.

So by removing the version from versionables and relegating them to identifiables, the model of abstract types in DDI is reduced to two classes with very clear intentions. In the new model, identifiables are objects which are reused through references within other objects to construct rich, linked metadata, while maintainables are the versioned objects that agencies use to administer cross-survey and cross-cycle metadata holdings.

However, as we’ll see in the next post, this change actually helps us take advantage of a number of useful XML technologies to simplify the learning process for DDI, for implementers and developers alike.

Next up: How Maintainables aren’t properly maintained

In the next post, I’ll cover how to simplify the DDI XSD Schemas to take advantage of XML identities by removing inline schemes and restricting base elements to simplify identification and URI design, so DDI can utilise URLs and XML fragments to precisely define objects for RESTful interfaces.

When DDI isn’t enough Part 2 – XSI Type and DDI

So a colleague left a comment on the last post about extending DDI that brought my attention to the use of xsi:type extensions to XML elements, which, for lack of a better term, make my last post look like child’s play! After a quick look, this technique can basically be used to make additions to practically every part of an XML-based data model – such as DDI. The important question is: how does it work?

When we add an element, its definition is implicitly determined by its namespace and element name. This definition tells us exactly what attributes and elements are required or optional. What we can do is add an explicit type to the element, which allows us to extend its definition.

For example, in the last post there is a demonstration of an Extended Conditional Text object that includes default and static text options. The downside is that a tool that handles the basic (non-extended) DDI 3.1 schema would not be able to use this content, as it is, for all intents and purposes, hidden. An alternative approach is to use the ExtendedConditionalTextType we defined in the previous blog post and, instead of creating a new element, declare our standard DDI ConditionalText to be an extension of this within the XML, like so:

<d:ConditionalText xsi:type="xd:ExtendedConditionalTextType" xmlns:xd="ddi:ExtendedDataCollection:3_1">
        <r:Code programmingLanguage="Pseudocode">if sex == 'Male' {return 'he'} else if sex == 'Female' {return 'she'} else {return 'they'}</r:Code>
</d:ConditionalText>

What this achieves is the ability to add(1) additional elements to the ConditionalText without having to create a new element. Any software that can process an element of this type can continue to work without having to accommodate any changes, and any additional elements will be (or should be) ignored.

As a second example of an extension that’s already being used, we will look at Algenta’s Colectica tool, which is probably the leading DDI editor available. This software introduced the ability to document the approximate time taken to complete a question. While this “time taken” content is being added to the DDI 3.2 specification, in DDI 3.1 this information is currently stored as a Note, making management and distribution of this information difficult (we will cover why Notes are difficult to manage in the next section of this now 3-part tutorial).

An alternative approach is the creation of a new XML Schema complex type, combined with the use of a similar xsi:type extension. Below is an example of the XML Schema needed to describe the additional element.
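A minimal sketch of such a schema follows; the type and element names match the prose below, but the namespace URIs and the assumption that the extension derives from the standard QuestionItemType are illustrative:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:d="ddi:datacollection:3_1"
           targetNamespace="ddi:ExtendedDataCollection:3_1">
  <xs:import namespace="ddi:datacollection:3_1"/>
  <!-- Extend the standard question item type with one extra element -->
  <xs:complexType name="QuestionItemWithTimeTaken">
    <xs:complexContent>
      <xs:extension base="d:QuestionItemType">
        <xs:sequence>
          <!-- xs:duration reuses the ISO 8601 duration format -->
          <xs:element name="ApproximateTimeToComplete" type="xs:duration"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>
</xs:schema>
```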

Here we see the declaration of the element type, as well as its extension and, lastly, the new element <ApproximateTimeToComplete>. It’s important to note that rather than using a basic numeric string for seconds or minutes, we are reusing the XML data type xs:duration – an implementation of the duration portion of the ISO 8601 date and time standard.

When we combine these we get a QuestionItem that looks similar to that below:

<d:QuestionItem id="exampleQuestion" xsi:type="xd:QuestionItemWithTimeTaken">
        <d:Text>You told me your dog likes to play fetch, what does </d:Text>
        <d:ConditionalText xsi:type="xd:ExtendedConditionalTextType">
                <r:Code programmingLanguage="Pseudocode">if sex == 'Male' {return 'he'} else if sex == 'Female' {return 'she'} else {return 'they'}</r:Code>
        </d:ConditionalText>
        <!-- illustrative value for the extension element -->
        <xd:ApproximateTimeToComplete>PT30S</xd:ApproximateTimeToComplete>
</d:QuestionItem>

When this is all put together, we get an XML fragment, that can be widely understood by DDI compliant software, but also contains additional metadata necessary for specific agencies or applications.

Just like last time, the full code for the above examples is available on pastebin – with the Extensions schema and the example DDI Instance both available for review. In the next post I’ll go over each of these two approaches and cover their advantages and pitfalls, when to use each, why with both of these approaches Notes are unnecessary, and what implications this has for the standard in general.


  1. As of yet I haven’t figured out how to remove elements (or if it is even possible) … I wouldn’t hold your breath for this one.