
Gamification in metadata creation – how do we show “quality” and encourage improvement?

Encouraging the creation of good metadata can be a challenging exercise. Systems for metadata creation need to allow blank fields so that incremental or in-progress content can be saved; however, some fields may be semantically recommended or mandatory for standardisation. So, while a metadata editing tool needs to be flexible enough to allow evolving content, it also needs to provide feedback to drive improvement.

This leads to three questions:

  1. Can we automatically measure metadata quality?
  2. Can we use this data to encourage metadata editors to more actively participate and improve content?
  3. How can we best show the “quality” of metadata to an editor to achieve better content?

Gamification is a recognised term for encouraging changes in user behaviour through small incentives. The question I'd like to pose for Aristotle is: how can these principles be used to encourage the creation of good metadata? Obviously, 'good' is very subjective, but for metadata objects such as Data Elements, at the bare minimum an attached "Data Element Concept" and "Value Domain" are prerequisites for a quality item. Likewise, a piece of metadata with a description is good, but a piece of metadata with a longer description is probably better (though not always, which leads to further challenges).

For the moment, let's assume that basic metrics for "good" metadata can be constructed and applied to a piece of metadata, and that these can be summed to give a raw score out of a possible total. This assumption means we can grade metadata completion and get a raw score like "77 passes out of a possible 82". From these figures we can derive all sorts of graphics that can influence user behaviour, and it's that which I'm interested in right now.
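As a minimal sketch of how such a raw score might be computed (the checks and item attributes here are hypothetical, not Aristotle's actual rules), each metric can be a small predicate over an item, and the raw score is simply the count of passes:

    def raw_score(item, checks):
        # Run every quality check against a metadata item and
        # count the passes against the possible total.
        passes = sum(1 for check in checks if check(item))
        return passes, len(checks)

    checks = [
        lambda item: item.data_element_concept is not None,  # has a DEC
        lambda item: item.value_domain is not None,          # has a Value Domain
        lambda item: len(item.description or "") >= 100,     # description length
    ]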

First of all, from these figures we can derive a percentage or rank – 77 out of 82 is about 94%, "9/10", "4.5/5" or "A-". This may mean a metadata item has all its relations and all fields are filled out, but one or two are a little shorter than our metrics would like. Perhaps though, the item is described perfectly – adding more text in this case is worse.

Secondly, there is the issue of how to present this visually once we’ve determined a score. There are probably many ways to present this, but for now I want to focus on two – symbols and progress bars. A symbol can be any repeated small graphic, but the best example is a star ranking where metadata is given a rank out of 5 stars.

Once a raw score is computed, we can then normalise this to a score out of 5 and show stars (or other symbols), as sketched below. However, initial discussions suggest that stars present a more abrupt ranking that discourages work-in-progress, rather than highlighting the work still to be done.
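A minimal sketch of that normalisation, continuing the figures above (rounding to the nearest half star is my assumption):

    def star_rating(passes, possible, stars=5):
        # Normalise a raw quality score to a star ranking,
        # rounded to the nearest half star.
        return round(passes / possible * stars * 2) / 2

    star_rating(77, 82)  # gives 4.5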

An alternative is the use of progress bars to show the completion of an item. Again, this is computed from the raw score, normalised to a percentage and then shown to the user. The images show different possible options, including a percentage complete and integer or rounded-decimal rankings out of 10. Again, initial discussions suggest that percentages may encourage over-work, where users on 94% might strive for 100% by 'gaming the system', as opposed to users with metadata ranked 9.5/10. For example, if a metadata item has a well-written short description that is under a predefined length limit, resulting in a score of 94%, we need to design a pattern that discourages an editor from 'adding' content-free text to 'score' 100%. The use of colour is also a possible way to gauge progress, analogous to the stars many users are familiar with, but it raises the question of how to define 'cut-offs' for quality.
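As a sketch, such a colour banding is only a few lines of code – the cut-off values below are purely illustrative, and are exactly the numbers that need feedback:

    def quality_colour(percent):
        # Map a completion percentage to a traffic-light colour band.
        # These cut-offs are illustrative assumptions, not settled values.
        if percent >= 90:
            return "green"   # effectively complete
        if percent >= 60:
            return "amber"   # usable, but needs attention
        return "red"         # early draft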

Metadata quality tracking in practice

In this section we look at a number of possible options for presenting the quality metrics of metadata using the Aristotle Metadata Registry. At the moment these are just mock-ups, but sample work has shown that dynamic analysis of metadata to quantify “quality” is possible, so here we will address the matter of how to show this.

First of all, it is important to note that these quality rankings can be shown at every stage, from first edit all the way through to final publication, so a one-size-fits-all approach may not be best. In the simple case, we can look at the difference between a progress bar and a star rating. Alongside all the basic details, metadata can be given a ranking right on the page, as well as a status, to give users immediate feedback on its fitness for use.

An example of how stars or bars may look for a user.

Secondly, we can look at simple presentation options. Here it’s important to note only one rating would be shown out of all the possible options. Stars offer fewer levels of granularity, and when coloured are bright and distracting. However, progress bars blend quite well, even when coloured, and give more options for embedding textual representations.

Lastly, we can see how these would look when a number of items are shown together. Using a ‘sparklines’ approach, we can use stars or bars to quickly highlight trouble spots in metadata when looking at a large number of items.


For the professional context of a registry, based on initial feedback, there are strengths in the progress bar style that make it more suited for use; however, more feedback is required to make a conclusive argument.

Conclusion

This is intended to be the first in a number of articles that address the issue of best practices in presenting "quality" in the creation of metadata, and the implementation of these practices in the Aristotle Metadata Registry. As such, I welcome and encourage comments and feedback on this design process, both from Aristotle users and the broader community.

Key questions for feedback

  1. Question 1: How can we textually show a metadata quality rank to encourage more participation? Possible options: raw values (77/82), percent (94%), normalised (9/10), graded (A-) or something else?
  2. Question 2: How can we visually show metadata rank to encourage participation? Possible options: stars (or other symbols), progress bars, colours, text only or something else?
  3. Question 3: How do we positively encourage completion without adversely encouraging “gaming the system”?
  4. Future questions:
    How do we programmatically measure metadata quality?
    Based on a set of sub-components of quality, how and when can we show a user how to improve metadata quality?

Adding advanced user tracking and security to the Aristotle Metadata Registry ecosystem

The Aristotle Metadata Registry is already built on the strong, secure web framework provided by Django, and includes a vast suite of tests that ensure the security of metadata at all stages of the data lifecycle.

But to enhance this, work has begun on a new extension to Aristotle, called Pythias, that incorporates additional user tracking and security to provide peace of mind when deploying large scale metadata systems.

Powered by django-axes and django-user-sessions, this will give Aristotle site administrators the power to track login successes and failures, block access at an IP level, automatically lock out user accounts and block concurrent logins. It will also give users the ability to view current logins and remotely log out of sessions.
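As a rough sketch, the settings this enables look something like the following (the setting names come from current releases of the two libraries and may change as Pythias develops):

    # settings.py
    INSTALLED_APPS += [
        'axes',           # track login successes/failures, lock out accounts
        'user_sessions',  # list sessions per user, allow remote logout
    ]

    AXES_FAILURE_LIMIT = 5  # lock an account after 5 failed logins...
    AXES_COOLOFF_TIME = 1   # ...for one hour

    # Store sessions against the user so they can be listed and ended remotely
    SESSION_ENGINE = 'user_sessions.backends.db'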

Make sure your continuous testing is continuous

One of the key features of the Aristotle Metadata Registry is its extensive test suite. Every time code is checked in, the test suites are run, and about 20 minutes later I get a notification saying everything is fine… or so it should be.

I recently made a small change to the test suite that altered no code and just changed some of the reporting. This shouldn't have changed how the tests were run, so they should have completed without problems, but this wasn't the case.

After a short investigation, I discovered that a library used in the Aristotle admin interface had changed in a big way. Unfortunately, I haven't been able to work on Aristotle as frequently as I'd like over the past few months, so this had gone completely unnoticed. Since the test environment is rebuilt every time the tests are run, it was using the most recent version of the library, while my code depended on an earlier version.

Since Aristotle is still in beta, the result wasn't disastrous, however it still highlights (for me at least) an issue with relying on a green tick in the test suite saying everything is alright – because while the tests might be alright at that point in time, that state is prone to change.

So if you have to put down a project for a few weeks, or longer, make sure to nudge your code periodically, just to make sure everything is still running ok.

As for how it was fixed, a short alteration to the requirements file got the tests passing again, and a newer version that incorporates the updated library will be coming shortly.
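For anyone hitting the same problem, the fix amounts to pinning the dependency in the requirements file rather than floating on the latest release – the library name below is a stand-in, not the actual culprit:

    # requirements.txt
    some-admin-library==1.4.2  # pinned to the version the code was written against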

Django-spaghetti-and-meatballs now available on pypi and through pip

The title covers most of it: django-spaghetti-and-meatballs (a little library I was making for producing entity-relationship diagrams from django models) is now packaged and available on PyPI, which means it can be installed via pip super easily:

pip install django-spaghetti-and-meatballs

There is even documentation for django-spaghetti-and-meatballs available on ReadTheDocs, so it's all super stable and ready to use. So get it while it's still fresh!

There is a live demo on the Aristotle Metadata Registry site, or you can check out the static version below:

A sample ERD

Two new projects for data management with django

I’ve recently been working on two new projects through work that I’ve been able to make open source. These are designed to make data and metadata management with Django much easier to do. While I’m not in a position to talk about the main work yet, I can talk about the libraries that have sprung out of it:

The first is the "Django Data Interrogator", which is a Django app for 1.7 and up that allows anyone to create tables of information from a database that stores Django models. I can see this being handy when you are storing lists of people, products or events and want to be able to produce ad-hoc reports similar to "People with the number of sales made" or "Products with the highest sales, grouped by region". At this stage this is done by giving a list of relations from a base 'class'; more information is available on the Git repo. I should give apologies to a more well-known project with the same acronym – I didn't pick the name, and will never acronymise this project.
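For comparison, a report like "People with the number of sales made" looks something like this when written by hand in the Django ORM (the Person model and 'sales' relation are hypothetical) – roughly the query the Interrogator builds for you from a list of relations:

    from django.db.models import Count

    # "People with the number of sales made" – assumes a Person model
    # with a reverse relation named 'sales'.
    people = Person.objects.annotate(sale_count=Count('sales'))
    for person in people:
        print(person.name, person.sale_count)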

The second is "Django Spaghetti and Meatballs", which is a tool to produce ERD-like diagrams from Django projects that, depending on the colours and number of models, look kind of like a plate of spaghetti. Once given a list of Django apps, it mines the Django content types table and produces an interactive JavaScript representation using the lovely VisJs library. This has been really useful for prototyping the database, as while Django code is very readable, as the number of models and cross-app connections grew, this gave us a good understanding of how the wider picture looked. The other big advantage is that it uses Python docstrings, Django help text and field definitions to produce all the text in the diagrams. The example below shows a few models in three apps: Django's built-in Auth models, and the Django notifications and revision apps:

A graph of django models

A sample plate of spicy meatballs – Ingredients: Django Auth, Notifications and Revisions
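If you want to plate up your own project, a minimal configuration looks something like this (see the documentation on ReadTheDocs for the full set of options):

    # settings.py
    INSTALLED_APPS += ['django_spaghetti']

    SPAGHETTI_SAUCE = {
        'apps': ['auth', 'notifications', 'reversion'],  # apps to diagram
        'show_fields': False,  # hide field detail for a less tangled plate
    }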

Request for comments/volunteers for the Aristotle Metadata Registry

This is a request for comments and volunteers for an open source ISO 11179 metadata registry I have been working on, called the Aristotle Metadata Registry (Aristotle-MDR). Aristotle-MDR is a Django/Python application that provides an authoring environment for a wide variety of 11179-compliant metadata objects, with a focus on being multilingual. As such, I'm hoping to raise interest among bug checkers, translators, experienced HTML and Python programmers, and data modelers for mapping ISO 11179 to DDI3.2 (and potentially other formats).

For the eager: links to the code, documentation and issue tracker are included below.

Background

Aristotle-MDR is based on the Australian Institute of Health and Welfare's METeOR Registry, an ISO 11179-compliant authoring tool that manages several thousand metadata items for tracking health, community services, hospital and primary care statistics. I have undertaken the Aristotle-MDR project to build upon the ideas behind METeOR and extend it, to improve compliance with 11179 but also to allow for access and discovery using other standards, including DDI and GSIM.

Aristotle-MDR is built on a number of existing open source frameworks, including Django, Haystack, Bootstrap and jQuery, which allows it to easily scale from mobile to desktop on the client side, and from small shared hosting to full-scale enterprise environments on the server side. Alongside the in-built authoring suite is the Haystack search platform, which allows for a range of search solutions, from enterprise engines such as Solr or Elasticsearch to smaller-scale search engines.
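For example, moving between search backends is a single Haystack setting – a sketch using the bundled Whoosh engine, which suits small installs (swap in the Solr or Elasticsearch engine classes for enterprise-scale search):

    # settings.py (assumes the standard Django BASE_DIR setting)
    import os

    HAYSTACK_CONNECTIONS = {
        'default': {
            'ENGINE': 'haystack.backends.whoosh_backend.WhooshEngine',
            'PATH': os.path.join(BASE_DIR, 'whoosh_index'),
        },
    }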

The goal of Aristotle-MDR is to conform to the ISO/IEC 11179 standard as closely as possible, so while it has a limited range of core metadata objects, much like the 11179 standard it allows for the easy extension and inclusion of additional items. Extensions already available include those for the management of datasets, questionnaires and performance indicators.

Information on how to create custom objects can be found in the documentation: http://aristotle-metadata-registry.readthedocs.org/en/latest/extensions/index.html

Due to the wide variety of needs for users to access information, there is a download extension API that allows for the creation of a wide variety of download formats. Included is the ability to generate PDF versions of content from simple HTML templates, and an additional module allows for the creation of DDI3.2 (at the moment this supports a small number of objects only): https://github.com/aristotle-mdr/aristotle-ddi-utils

As mentioned, this is a call for comments and volunteers. First and foremost, I'd appreciate as much help as possible with my mapping of 11179 objects to DDI3.2 (or earlier versions), but also with translations for the user interface – which is currently available in English and Swedish (thanks to Olof Olsson). Partial translations into other languages are available thanks to translations in the Django source code, but additional translations of technical terms would be appreciated. More information on how to contribute to translating is available on the wiki: https://github.com/aristotle-mdr/aristotle-metadata-registry/wiki/Providing-translations.

To aid with this I’ve added a few blank translation files in common languages. Once the repository is forked, it should be relatively straightforward to edit these in Github and send a pull request back without having to pull down the entire codebase. These are listed by ISO 639-1 code, and if you don’t see your own listed let me know and I can quickly pop a boilerplate translation file in.

https://github.com/aristotle-mdr/aristotle-metadata-registry/tree/master/aristotle_mdr/locale

If you find bugs or identify areas of work, feel free to raise them either by emailing me or by raising a bug on Github: https://github.com/aristotle-mdr/aristotle-metadata-registry/issues

Aristotle Metadata Registry now has a GitHub organisation

This weekend's task has been upgrading Aristotle from a single-user repository to a GitHub organisation. The new Aristotle-MDR organisation holds the main code for the Aristotle Metadata Registry, but alongside that it also has the DDI Utilities codebase and some additional extensions, along with the new "Aristotle Glossary" extension.

This new extension pulls the glossary code out of the core codebase to improve its status as a "pure" ISO/IEC 11179 implementation, as stated in the Aristotle-MDR mission statement. It will also provide additional Django post-save hooks to provide easy look-ups from glossary items to any item that requires the glossary item in its definition.
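A sketch of how such a hook might look (the model and relation names are illustrative, not the final API):

    from django.db.models.signals import post_save
    from django.dispatch import receiver

    @receiver(post_save, sender=GlossaryItem)  # illustrative model name
    def update_glossary_links(sender, instance, **kwargs):
        # When a glossary item is saved, refresh the look-ups from
        # the items that cite it in their definitions.
        for item in instance.cited_by.all():  # illustrative reverse relation
            item.refresh_definition_links()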

If you are curious about the procedure for migrating an existing project from a personal repository to an organisation, I’ve written a step-by-step guide on StackExchange that runs through all of the steps and potential issues.

Aristotle-Metadata-Registry – My worst kept secret

About 6 months ago I stopped frequently blogging, as I began work on a project that was not quite ready for a wider audience, but today that period comes to a close.

Over the past year, I have been working on a new piece of open-source software – an ISO/IEC 11179 metadata registry. This originally began from my experiences working on the METeOR registry, which gave me an in-depth understanding of the systems and governance issues around the management of metadata across large-scale organisations. I believe Aristotle-MDR provides one of the closest open-source implementations of the information model of Part 6 and the registration workflows of Part 3, in a package that is easy to use and install.

In that time, Aristotle-MDR has grown to several thousand lines of code, most substantially over 5000 lines of rigorously tested Python code, tested using a suite of over 500 regression tests, and rich documentation covering installation, configuration and extension. From a front-end perspective, Aristotle-MDR uses the Bootstrap, CKEditor and jQuery libraries to provide a seamless, responsive experience, the Haystack search engine provides scalable and accurate search capability, and custom wizards encourage the discovery and reuse of metadata at the point of content creation.

One of the guiding principles of Aristotle-MDR has been to not only model 11179 in a straightforward fashion, but to do so in a way that complies with the extension principles of the standard itself. To this end, while the data model of Aristotle-MDR is and will remain quite bare-bones, it provides a robust, tested framework on which extensions can be built. A number of such extensions are already being built, including those for the management of datasets, questionnaires and performance indicators, and for the sharing of information in the Data Documentation Initiative XML format.

In the last 12 months, I have learned a lot as a systems developer, had the opportunity to contribute to several Django-based projects and look forward to sharing Aristotle, especially at IASSIST 2015 where I aim to present Aristotle-MDR as a stable 1.0 release. In the interim, there is a demonstration server for Aristotle available, with two guest accounts and a few hundred example items for people to use, test and possibly break.

Why Linus Torvalds is wrong about XML

Linus Torvalds is one of the most revered figures in modern computer science and has made the kind of contributions to the world that I hope to achieve. However, given his global audience, his recent statements about XML give me pause for reflection.

I have worked with XML in a number of jobs, helped with the specification of international XML formats, written tutorials on their use, and even made my own XML format (with reason I might add). And I must say, in reply to Linus’s statement that

XML is the worst format ever designed

XML isn't the problem; the problem is bad programmers. Computer Science is a broad field, covering not just the creation of programs, but also the correct specification of information for computation. The lack of appreciation for that second aspect has seen the recent rise of "Data Science" as a field – a mash of statistics, data management and programming.

While it is undeniable that many programmers write bad XML, this is because of poor understanding and discipline. One could equally say "people write bad code, let's stop them writing code". People will always make mistakes or cut corners; the solution is education, not reinventing the wheel.

Linus and the rest of the Subsurface team are well within their rights to use the data formats they choose, and I am eager to see what new formats he can design. But with that in mind, I will address some of the critiques of Linus and others about XML and point out their issues, followed by some handy tips for programmers looking at using XML.

XML should be human readable

I did the best that I could with XML, and I suspect the subsurface XML is about as pretty and human-readable as you can make that crap

CSV isn't very readable; C, Perl and Python aren't very human-readable either. What is "human-readable" is very subjective, as even English isn't human-readable to non-English speakers.

Restricting ourselves to just technology, CSV isn't very readable for any non-trivial amount of data, as the header will scroll off the top of the screen and data will overflow onto the next line or outside the horizontal boundaries of the screen. One could argue that it's possible in Excel, OpenOffice or using a Vim/Emacs plugin to lock the headers to the top of the screen – and now we have used a tool to overcome limitations in the format.

Likewise, the same can be said for computer code: code-folding, auto-completion of long function and variable names, and syntax highlighting are all software features that overcome failures in the format and make the output more "human-readable". Plain text supports none of the above, yet no one would recommend writing code in Notepad for its lack of features.

Likewise, I would never, ever recommend writing XML in a non-XML editor. Auto-adding of closing tags, checking schemas as you type, easy access to the schema via hotlinks from elements and attributes, and XPath query and replace are all vital functions of a good XML editor. All of these make writing XML much easier and more approachable, and compared to code or CSV, a programmer should only need to spend as much time in an XML editor as it takes to understand the format well enough to make writing XML in code easier.

While it can be said that a poor craftsman blames his tools, a good craftsman knows when to use the right tools as well.

XML files should stand alone

This is most visible in this bug raised in Subsurface where it is stated that:

Subsurface only ever stores metric units. But our goal is to create files that make sense and can be read and understood without additional information.

Now, examination of a sample of the XML from Subsurface shows a glaring contradiction. There is nothing in this file that says that units are in metric. The distance 'm' could equally stand for 'miles', and while the order of magnitude would make misinterpretation by a human hard, a dive computer with an incorrect understanding may miscalculate the required oxygen pressure, leading to potential death. To accurately understand this file, I need to find the documentation, i.e. additional information. The reason schemas exist is to explicitly describe a data file.

Additionally, because data is stored as "human-readable" strings, I could validly put in "thirty metres" instead of "30.0 m" as a depth. At this point the program might fail, but as someone writing the data elsewhere I'd have no idea why. Apart from being a description of the data, a schema exists as a contract: if you say the data is of this form, then these are the rules you must conform to. When you are looking at sharing data between programs or organisations, this ability to lean on technical enforcement is invaluable, as making "bad" data is that much harder.
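This contract is also machine-checkable. A minimal sketch in Python with lxml, assuming a hypothetical dive.xsd that formally describes the format:

    from lxml import etree

    schema = etree.XMLSchema(etree.parse("dive.xsd"))  # the formal contract
    doc = etree.parse("dive_log.xml")

    if not schema.validate(doc):
        # "thirty metres" in a decimal-typed field is rejected here,
        # before any program tries to compute with it.
        for error in schema.error_log:
            print(error.message)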

XML shouldn’t need other formats

This is a tricky one, as when people think of XML, even if they have made a schema, their mind stops there. XML isn't just a format; it's more a suite of related formats that can make handling and manipulating information easier.

It's worth noting that people have raised databases within that thread as an alternative – but SQL is only a query language; it requires the formal Data Definition Language to describe the data and an engine to query over it. Likewise, HTML without CSS, JavaScript or any number of the programming and templating languages that power the web would be much less useful to the general public.

Similarly, isolating XML from XML schemas means your data has no structure. Isolating XML from XQuery and XPath means you have no way of querying your data. Without XSLT there is no easy, declarative way to transform XML, and having done this with both traditional languages and XSLT, the latter makes using and transforming XML much easier. Ultimately, using XML without taking advantage of the technologies that exist in the wider XML landscape is not using the technology at its best.
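To make that concrete, here is the sort of one-liner the wider XML toolchain gives you for free, again in Python with lxml (the file and attribute names are illustrative):

    from lxml import etree

    doc = etree.parse("dive_log.xml")
    # XPath: every dive deeper than 30 m, without writing a parser.
    deep_dives = doc.xpath("//dive[number(@depth) > 30]")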

Tips for good XML

With all of that aside, XML like all technologies can be used poorly. However, when done well and documented properly, a good XML format with an appropriate schema can reduce errors and give vital metadata that gives data context and longevity. So I present a few handy tips for using XML well.

  1. Only use XML when appropriate. XML is best suited to complex data, especially hierarchical data. As Linus (and others) point out in the linked thread, tabular data is much better suited to CSV or more structured tabular formats, simple key-values can be stored in ini files, and marked-up text can be done in HTML, Markdown or any number of other formats.
  2. Look for other formats. If you are thinking of using XML for your tool, stop and see what others have already done. The world doesn't need another format, so if you are thinking of making one you should have a very, very good reason to do so.
  3. Use a schema or doctype. If you choose to make your own format, this is the most important point. If you choose to use XML, make a schema. How you choose to capture this – Doctype, XSD Schema, Schematron, Relax NG – is largely irrelevant. What is important is that your data format is documented. There are even tools that can automate creating schema stubs from documents, so there is no excuse not to. As stated, an XML schema is the formal contract about what your data is, and lets others know that if the data doesn't conform to this format then it is broken.
  4. Use XML datatypes. XML already has specifications for text, numeric, datetime and identification data. Use these as a starting point for your data.
  5. Store one type of data per field. While the difference between <dive duration="30:00 mins"> and <dive duration="30" durationUnit="mins"> is minimal, the former uses a single string for two pieces of data, while the latter uses two fields, a number and an enumerable, each storing one piece of data. An even better solution is using the XML duration datatype, <dive duration="PT30M">, based on the existing ISO 8601 standard.

A Request for Comments on a new XML Questionnaire Specification Format (SQBL)

This is an announcement and Request for Comments on SQBL, a new
open-source XML format for the cross-platform development of questionnaire
specifications. The design decisions behind SQBL and additional details are the
subject of a paper to be presented in 2 weeks at the 2013 IASSIST conference in
Cologne, Germany:
– Do We Need a Perfect Metadata Standard or is “Good Enough” Good Enough?
http://www.iassist2013.org/program/sessions/session-c4/#c220
However, to ensure people are well-informed ahead of time, I am releasing
details ahead of the conference.

The gist

SQBL – The Structured (or Simple) Questionnaire Building Language is an
emerging XML format designed to allow survey researchers of all fields to
easily produce questionnaire specifications with the required structure to
enable deployment to any questionnaire platform – including, but not limited
to, Blaise, DDI, LimeSurvey, XForms and paper surveys.

The problem

Analysing the current state of questionnaire design and development shows that
there are relatively few tools available that allow a survey
designer to create questionnaire specifications in a simple manner,
whilst providing the structure necessary to verify respondent routing and
provide a reliable input to the automation of questionnaire deployment.

Of the current questionnaire creation tools available, they either:
* prevent the sharing of content (such as closed tools like SurveyMonkey),
* require extensive programming experience (such as Blaise or CASES),
* or use formats that make transformation difficult (such as those based on DDI).
Given the high cost of questionnaire design – in the creation, testing and
deployment of final questionnaires – a format that can reduce the cost in any
or all of these areas will have positive effects for researchers.

Furthermore, by providing researchers with the easy tools necessary to create
questionnaires, they will consequently create structured metadata, thus
reducing the well-understood documentation burden for archivists.

Structured questionnaire design

Last year, I wrote a paper, "The Case Against the Skip Statement", which
described the computational theory of questionnaire logic – namely the
structures used to describe skips and routing logic in questionnaires. This
paper was awarded 3rd place in the International Association for Official
Statistics '2013 Young Statistician Prize' http://bit.ly/IAOS2012. This paper
is awaiting publication, but can be made available for private reading on
request. It proposed that this routing logic in questionnaires is structurally
identical to that of computer programs. Following this assertion, it stated
that a higher-order language can be created that acts as a “high-level
questionnaire specification logic” that can be compiled to any questionnaire
platform, in much the same way that computer programming languages can be
compiled to machine language. Unfortunately, while some existing formats
incorporate some of the principles of Structured Questionnaire Design, they are
incomplete or too complex to provide the proposed benefits.

SQBL – The Structured (or Simple) Questionnaire Building Language

SQBL http://sqbl.org is an XML format that acts as a high-level language for
describing questionnaire logic. Small and simple, yet powerful, it incorporates
XML technologies to reduce the barrier to entry and to make questionnaire
specifications readable, even in raw XML. Underlying this
simplicity is a strict schema that enforces single solutions to problems,
meaning SQBL can be transformed into a format for any survey tool that has a
published specification.

Furthermore, because of its small schema and incorporation of XML and HTTP core
technologies, it is easier for developers to work with. In turn, this makes
survey design more comprehensible through the creation of easier tools, and
will help remove the need for costly, specialised instrument programmers
through automation.

Canard – the SQBL Question Module Editor

Announced alongside the Request for Comments on SQBL is an early beta release
of the SQBL-based Canard Question Module Editor http://bit.ly/CANARD. Canard is
designed as a proof-of-concept tool to illustrate how questionnaire
specifications can be generated in an easy to use drag-and-drop interface. This
is achieved by providing designers with instant feedback on changes to
specifications through its 2 panel design that allows researchers to see the
logical specification, routing paths and example questionnaires all within the
same tool.

SQBL and other standards

SQBL is not a competitor to any existing standard, mainly because a structured
approach to questionnaire design based on solid theory has never been attempted
before. SQBL fills a niche that other standards don't yet fill well.

For example, while DDI can archive any questionnaire as is, this is because
of the loose structure necessary for being able to archive uncontrolled
metadata. However, if we want to be able to make questionnaire specifications
that can be used to drive processes, what is needed is the strict structure of
SQBL.

Similarly, SQBL has loose couplings to other information through standard HTTP
URIs allowing linkages to any networked standard. For example, Data Elements may
be described in a DDI registry, which a SQBL question can reference via its
DDI-URI. Additionally, to support automation, a survey instrument described
inside a DDI Data Collection can, rather than pointing to a DDI Sequence
containing the instrument details, use existing linkages to external standards
to point to a SQBL document via a standard URL. Once data collection is complete,
harmonisation can be performed as each SQBL module has questions pointing to
variables, so data has comparability downstream.

SQBL in action

The SQBL XML schemas are available on GitHub http://bit.ly/sqbl-schema, in a
repository that also contains examples and files from the video tutorials.
There is also a website http://sqbl.org with more information on the format
and on some of the principles of Structured Questionnaire Design.

If you don’t like getting your hands dirty with XML you can download the
Windows version of the Canard Question Module Editor from Dropbox
http://bit.ly/canardexe and start producing questionnaire specifications
immediately. All that needs to be done is to unzip the file and run the file
named . Due to dependencies, flowcharts may not be immediately available;
however, this can be fixed by installing the free third-party graphing tool
Graphviz http://www.graphviz.org/

Lastly, there is a growing number of tutorial videos on how to use Canard on YouTube.

Video 1 – Basic Questions http://www.youtube.com/watch?v=ijk00SqoBGk (2:17 min)
Video 2 – Complex Responses http://www.youtube.com/watch?v=d3Vrn2B4EO4 (2:17 min)
Video 3 – Simple Logic http://www.youtube.com/watch?v=GrAWbOF-UW8 (4:11 min)

There is also an early beta video that runs through creating an entire
questionnaire showing the side-by-side preview.
http://www.youtube.com/watch?v=_FImaXn7EYk (13:21 mins)

Joining the SQBL community

First of all there is a mailing list for SQBL hosted by Google Groups:
https://groups.google.com/forum/?fromgroups#!forum/sqbl.

Along with this, each of the GitHub repositories (http://bit.ly/sqbl-schema,
http://bit.ly/CANARD) includes an issue tracker. Both Canard and SQBL are in
early design stages so there is an opportunity for feedback and input to ensure
both SQBL and Canard support the needs of all questionnaire designers.

Lastly, while there are initial examples of conversion tools to transform SQBL
into DDI-Lifecycle 3.1 and XForms, there is room for growth. Given the
proliferation of customised solutions to deploy both paper and web-forms there
is a need for developers to support the creation of transformations from SQBL
into formats such as Blaise, LimeSurvey, CASES and more.
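Because SQBL is plain XML, such a transformation can be as little as an
XSLT stylesheet applied with standard tooling. A sketch in Python with
lxml (the stylesheet and file names here are hypothetical):

    from lxml import etree

    # Compile the (hypothetical) stylesheet mapping SQBL to XForms.
    transform = etree.XSLT(etree.parse("sqbl-to-xforms.xsl"))
    survey = etree.parse("my_questionnaire.sqbl.xml")
    print(str(transform(survey)))  # the questionnaire, rendered as XForms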

If you have made it this far, thank you for reading all the way through, and I
look forward to all the feedback people have to offer.

Cheers and I look forward to feedback now or at IASSIST,

Samuel Spencer.
SQBL & Canard Lead Developer
IASSIST Asia/Pacific Regional Secretary

http://about.me/legostormtroopr

http://au.linkedin.com/in/legostormtroopr/